Test Report: KVM_Linux_crio 20501

4595c49781c9e25c283632264448e235cf0fce36:2025-04-09:39062

Test fail (10/220)

TestAddons/parallel/Ingress (154.94s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-355098 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-355098 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-355098 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [1016ffa1-7455-4039-9721-439abd6919c0] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [1016ffa1-7455-4039-9721-439abd6919c0] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.002638405s
I0408 22:48:47.786477   16314 kapi.go:150] Service nginx in namespace default found.
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-355098 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-355098 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m12.255881902s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:286: (dbg) Run:  kubectl --context addons-355098 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p addons-355098 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.39.199
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-355098 -n addons-355098
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-355098 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-355098 logs -n 25: (1.131540211s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-967138                                                                     | download-only-967138 | jenkins | v1.35.0 | 08 Apr 25 22:45 UTC | 08 Apr 25 22:45 UTC |
	| delete  | -p download-only-681558                                                                     | download-only-681558 | jenkins | v1.35.0 | 08 Apr 25 22:45 UTC | 08 Apr 25 22:45 UTC |
	| delete  | -p download-only-967138                                                                     | download-only-967138 | jenkins | v1.35.0 | 08 Apr 25 22:45 UTC | 08 Apr 25 22:45 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-634721 | jenkins | v1.35.0 | 08 Apr 25 22:45 UTC |                     |
	|         | binary-mirror-634721                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:33783                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-634721                                                                     | binary-mirror-634721 | jenkins | v1.35.0 | 08 Apr 25 22:45 UTC | 08 Apr 25 22:45 UTC |
	| addons  | disable dashboard -p                                                                        | addons-355098        | jenkins | v1.35.0 | 08 Apr 25 22:45 UTC |                     |
	|         | addons-355098                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-355098        | jenkins | v1.35.0 | 08 Apr 25 22:45 UTC |                     |
	|         | addons-355098                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-355098 --wait=true                                                                | addons-355098        | jenkins | v1.35.0 | 08 Apr 25 22:45 UTC | 08 Apr 25 22:47 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --addons=amd-gpu-device-plugin                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	| addons  | addons-355098 addons disable                                                                | addons-355098        | jenkins | v1.35.0 | 08 Apr 25 22:47 UTC | 08 Apr 25 22:47 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-355098 addons disable                                                                | addons-355098        | jenkins | v1.35.0 | 08 Apr 25 22:48 UTC | 08 Apr 25 22:48 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-355098        | jenkins | v1.35.0 | 08 Apr 25 22:48 UTC | 08 Apr 25 22:48 UTC |
	|         | -p addons-355098                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-355098 addons                                                                        | addons-355098        | jenkins | v1.35.0 | 08 Apr 25 22:48 UTC | 08 Apr 25 22:48 UTC |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-355098 addons                                                                        | addons-355098        | jenkins | v1.35.0 | 08 Apr 25 22:48 UTC | 08 Apr 25 22:48 UTC |
	|         | disable nvidia-device-plugin                                                                |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-355098 addons                                                                        | addons-355098        | jenkins | v1.35.0 | 08 Apr 25 22:48 UTC | 08 Apr 25 22:48 UTC |
	|         | disable cloud-spanner                                                                       |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-355098 addons disable                                                                | addons-355098        | jenkins | v1.35.0 | 08 Apr 25 22:48 UTC | 08 Apr 25 22:48 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ip      | addons-355098 ip                                                                            | addons-355098        | jenkins | v1.35.0 | 08 Apr 25 22:48 UTC | 08 Apr 25 22:48 UTC |
	| addons  | addons-355098 addons disable                                                                | addons-355098        | jenkins | v1.35.0 | 08 Apr 25 22:48 UTC | 08 Apr 25 22:48 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-355098 addons                                                                        | addons-355098        | jenkins | v1.35.0 | 08 Apr 25 22:48 UTC | 08 Apr 25 22:48 UTC |
	|         | disable inspektor-gadget                                                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-355098 ssh cat                                                                       | addons-355098        | jenkins | v1.35.0 | 08 Apr 25 22:48 UTC | 08 Apr 25 22:48 UTC |
	|         | /opt/local-path-provisioner/pvc-fb751989-87e0-4024-b7d5-3cb6b29c4ba8_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-355098 addons disable                                                                | addons-355098        | jenkins | v1.35.0 | 08 Apr 25 22:48 UTC | 08 Apr 25 22:49 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-355098 addons disable                                                                | addons-355098        | jenkins | v1.35.0 | 08 Apr 25 22:48 UTC | 08 Apr 25 22:48 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| ssh     | addons-355098 ssh curl -s                                                                   | addons-355098        | jenkins | v1.35.0 | 08 Apr 25 22:48 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | addons-355098 addons                                                                        | addons-355098        | jenkins | v1.35.0 | 08 Apr 25 22:49 UTC | 08 Apr 25 22:49 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-355098 addons                                                                        | addons-355098        | jenkins | v1.35.0 | 08 Apr 25 22:49 UTC | 08 Apr 25 22:49 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-355098 ip                                                                            | addons-355098        | jenkins | v1.35.0 | 08 Apr 25 22:51 UTC | 08 Apr 25 22:51 UTC |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/08 22:45:43
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0408 22:45:43.155638   16992 out.go:345] Setting OutFile to fd 1 ...
	I0408 22:45:43.155918   16992 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0408 22:45:43.155928   16992 out.go:358] Setting ErrFile to fd 2...
	I0408 22:45:43.155933   16992 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0408 22:45:43.156123   16992 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20501-9125/.minikube/bin
	I0408 22:45:43.156684   16992 out.go:352] Setting JSON to false
	I0408 22:45:43.157501   16992 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":1688,"bootTime":1744150655,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0408 22:45:43.157579   16992 start.go:139] virtualization: kvm guest
	I0408 22:45:43.159772   16992 out.go:177] * [addons-355098] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0408 22:45:43.161386   16992 out.go:177]   - MINIKUBE_LOCATION=20501
	I0408 22:45:43.161406   16992 notify.go:220] Checking for updates...
	I0408 22:45:43.164138   16992 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0408 22:45:43.165612   16992 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20501-9125/kubeconfig
	I0408 22:45:43.166927   16992 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20501-9125/.minikube
	I0408 22:45:43.168281   16992 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0408 22:45:43.169614   16992 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0408 22:45:43.170996   16992 driver.go:404] Setting default libvirt URI to qemu:///system
	I0408 22:45:43.201663   16992 out.go:177] * Using the kvm2 driver based on user configuration
	I0408 22:45:43.202790   16992 start.go:297] selected driver: kvm2
	I0408 22:45:43.202804   16992 start.go:901] validating driver "kvm2" against <nil>
	I0408 22:45:43.202817   16992 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0408 22:45:43.203734   16992 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 22:45:43.203831   16992 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20501-9125/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0408 22:45:43.218109   16992 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0408 22:45:43.218144   16992 start_flags.go:311] no existing cluster config was found, will generate one from the flags 
	I0408 22:45:43.218353   16992 start_flags.go:975] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0408 22:45:43.218382   16992 cni.go:84] Creating CNI manager for ""
	I0408 22:45:43.218422   16992 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0408 22:45:43.218430   16992 start_flags.go:320] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0408 22:45:43.218470   16992 start.go:340] cluster config:
	{Name:addons-355098 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:addons-355098 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAg
entPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 22:45:43.218558   16992 iso.go:125] acquiring lock: {Name:mk618477bad490b102618c53c9c8c6b34f33ce81 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 22:45:43.220249   16992 out.go:177] * Starting "addons-355098" primary control-plane node in "addons-355098" cluster
	I0408 22:45:43.221387   16992 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0408 22:45:43.221424   16992 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20501-9125/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4
	I0408 22:45:43.221434   16992 cache.go:56] Caching tarball of preloaded images
	I0408 22:45:43.221519   16992 preload.go:172] Found /home/jenkins/minikube-integration/20501-9125/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0408 22:45:43.221532   16992 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on crio
	I0408 22:45:43.221841   16992 profile.go:143] Saving config to /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/addons-355098/config.json ...
	I0408 22:45:43.221868   16992 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/addons-355098/config.json: {Name:mk40641c7e62dd2bfcecbfaa9ec118b972d756c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 22:45:43.222023   16992 start.go:360] acquireMachinesLock for addons-355098: {Name:mke7be7b51cfddf557a39ecf6493fff6a1168ec9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 22:45:43.222087   16992 start.go:364] duration metric: took 45.986µs to acquireMachinesLock for "addons-355098"
	I0408 22:45:43.222113   16992 start.go:93] Provisioning new machine with config: &{Name:addons-355098 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterNa
me:addons-355098 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Bi
naryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0408 22:45:43.222180   16992 start.go:125] createHost starting for "" (driver="kvm2")
	I0408 22:45:43.223708   16992 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0408 22:45:43.223851   16992 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 22:45:43.223920   16992 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 22:45:43.237819   16992 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44083
	I0408 22:45:43.238340   16992 main.go:141] libmachine: () Calling .GetVersion
	I0408 22:45:43.238867   16992 main.go:141] libmachine: Using API Version  1
	I0408 22:45:43.238887   16992 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 22:45:43.239271   16992 main.go:141] libmachine: () Calling .GetMachineName
	I0408 22:45:43.239437   16992 main.go:141] libmachine: (addons-355098) Calling .GetMachineName
	I0408 22:45:43.239565   16992 main.go:141] libmachine: (addons-355098) Calling .DriverName
	I0408 22:45:43.239742   16992 start.go:159] libmachine.API.Create for "addons-355098" (driver="kvm2")
	I0408 22:45:43.239766   16992 client.go:168] LocalClient.Create starting
	I0408 22:45:43.239798   16992 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/20501-9125/.minikube/certs/ca.pem
	I0408 22:45:43.401305   16992 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/20501-9125/.minikube/certs/cert.pem
	I0408 22:45:43.508084   16992 main.go:141] libmachine: Running pre-create checks...
	I0408 22:45:43.508106   16992 main.go:141] libmachine: (addons-355098) Calling .PreCreateCheck
	I0408 22:45:43.508613   16992 main.go:141] libmachine: (addons-355098) Calling .GetConfigRaw
	I0408 22:45:43.509091   16992 main.go:141] libmachine: Creating machine...
	I0408 22:45:43.509105   16992 main.go:141] libmachine: (addons-355098) Calling .Create
	I0408 22:45:43.509289   16992 main.go:141] libmachine: (addons-355098) creating KVM machine...
	I0408 22:45:43.509310   16992 main.go:141] libmachine: (addons-355098) creating network...
	I0408 22:45:43.510587   16992 main.go:141] libmachine: (addons-355098) DBG | found existing default KVM network
	I0408 22:45:43.511246   16992 main.go:141] libmachine: (addons-355098) DBG | I0408 22:45:43.511101   17014 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0001165c0}
	I0408 22:45:43.511274   16992 main.go:141] libmachine: (addons-355098) DBG | created network xml: 
	I0408 22:45:43.511286   16992 main.go:141] libmachine: (addons-355098) DBG | <network>
	I0408 22:45:43.511298   16992 main.go:141] libmachine: (addons-355098) DBG |   <name>mk-addons-355098</name>
	I0408 22:45:43.511306   16992 main.go:141] libmachine: (addons-355098) DBG |   <dns enable='no'/>
	I0408 22:45:43.511329   16992 main.go:141] libmachine: (addons-355098) DBG |   
	I0408 22:45:43.511345   16992 main.go:141] libmachine: (addons-355098) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0408 22:45:43.511355   16992 main.go:141] libmachine: (addons-355098) DBG |     <dhcp>
	I0408 22:45:43.511366   16992 main.go:141] libmachine: (addons-355098) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0408 22:45:43.511376   16992 main.go:141] libmachine: (addons-355098) DBG |     </dhcp>
	I0408 22:45:43.511387   16992 main.go:141] libmachine: (addons-355098) DBG |   </ip>
	I0408 22:45:43.511399   16992 main.go:141] libmachine: (addons-355098) DBG |   
	I0408 22:45:43.511417   16992 main.go:141] libmachine: (addons-355098) DBG | </network>
	I0408 22:45:43.511432   16992 main.go:141] libmachine: (addons-355098) DBG | 
	I0408 22:45:43.517058   16992 main.go:141] libmachine: (addons-355098) DBG | trying to create private KVM network mk-addons-355098 192.168.39.0/24...
	I0408 22:45:43.581848   16992 main.go:141] libmachine: (addons-355098) DBG | private KVM network mk-addons-355098 192.168.39.0/24 created
	I0408 22:45:43.581882   16992 main.go:141] libmachine: (addons-355098) DBG | I0408 22:45:43.581800   17014 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20501-9125/.minikube
	I0408 22:45:43.581894   16992 main.go:141] libmachine: (addons-355098) setting up store path in /home/jenkins/minikube-integration/20501-9125/.minikube/machines/addons-355098 ...
	I0408 22:45:43.581910   16992 main.go:141] libmachine: (addons-355098) building disk image from file:///home/jenkins/minikube-integration/20501-9125/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0408 22:45:43.581974   16992 main.go:141] libmachine: (addons-355098) Downloading /home/jenkins/minikube-integration/20501-9125/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20501-9125/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0408 22:45:43.854103   16992 main.go:141] libmachine: (addons-355098) DBG | I0408 22:45:43.853952   17014 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20501-9125/.minikube/machines/addons-355098/id_rsa...
	I0408 22:45:44.037749   16992 main.go:141] libmachine: (addons-355098) DBG | I0408 22:45:44.037634   17014 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20501-9125/.minikube/machines/addons-355098/addons-355098.rawdisk...
	I0408 22:45:44.037785   16992 main.go:141] libmachine: (addons-355098) DBG | Writing magic tar header
	I0408 22:45:44.037799   16992 main.go:141] libmachine: (addons-355098) DBG | Writing SSH key tar header
	I0408 22:45:44.037810   16992 main.go:141] libmachine: (addons-355098) DBG | I0408 22:45:44.037750   17014 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20501-9125/.minikube/machines/addons-355098 ...
	I0408 22:45:44.037824   16992 main.go:141] libmachine: (addons-355098) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20501-9125/.minikube/machines/addons-355098
	I0408 22:45:44.037867   16992 main.go:141] libmachine: (addons-355098) setting executable bit set on /home/jenkins/minikube-integration/20501-9125/.minikube/machines/addons-355098 (perms=drwx------)
	I0408 22:45:44.037893   16992 main.go:141] libmachine: (addons-355098) setting executable bit set on /home/jenkins/minikube-integration/20501-9125/.minikube/machines (perms=drwxr-xr-x)
	I0408 22:45:44.037988   16992 main.go:141] libmachine: (addons-355098) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20501-9125/.minikube/machines
	I0408 22:45:44.038040   16992 main.go:141] libmachine: (addons-355098) setting executable bit set on /home/jenkins/minikube-integration/20501-9125/.minikube (perms=drwxr-xr-x)
	I0408 22:45:44.038058   16992 main.go:141] libmachine: (addons-355098) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20501-9125/.minikube
	I0408 22:45:44.038072   16992 main.go:141] libmachine: (addons-355098) setting executable bit set on /home/jenkins/minikube-integration/20501-9125 (perms=drwxrwxr-x)
	I0408 22:45:44.038085   16992 main.go:141] libmachine: (addons-355098) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0408 22:45:44.038098   16992 main.go:141] libmachine: (addons-355098) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20501-9125
	I0408 22:45:44.038109   16992 main.go:141] libmachine: (addons-355098) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0408 22:45:44.038126   16992 main.go:141] libmachine: (addons-355098) creating domain...
	I0408 22:45:44.038136   16992 main.go:141] libmachine: (addons-355098) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0408 22:45:44.038147   16992 main.go:141] libmachine: (addons-355098) DBG | checking permissions on dir: /home/jenkins
	I0408 22:45:44.038156   16992 main.go:141] libmachine: (addons-355098) DBG | checking permissions on dir: /home
	I0408 22:45:44.038169   16992 main.go:141] libmachine: (addons-355098) DBG | skipping /home - not owner
	I0408 22:45:44.039043   16992 main.go:141] libmachine: (addons-355098) define libvirt domain using xml: 
	I0408 22:45:44.039058   16992 main.go:141] libmachine: (addons-355098) <domain type='kvm'>
	I0408 22:45:44.039066   16992 main.go:141] libmachine: (addons-355098)   <name>addons-355098</name>
	I0408 22:45:44.039071   16992 main.go:141] libmachine: (addons-355098)   <memory unit='MiB'>4000</memory>
	I0408 22:45:44.039076   16992 main.go:141] libmachine: (addons-355098)   <vcpu>2</vcpu>
	I0408 22:45:44.039079   16992 main.go:141] libmachine: (addons-355098)   <features>
	I0408 22:45:44.039084   16992 main.go:141] libmachine: (addons-355098)     <acpi/>
	I0408 22:45:44.039088   16992 main.go:141] libmachine: (addons-355098)     <apic/>
	I0408 22:45:44.039092   16992 main.go:141] libmachine: (addons-355098)     <pae/>
	I0408 22:45:44.039098   16992 main.go:141] libmachine: (addons-355098)     
	I0408 22:45:44.039102   16992 main.go:141] libmachine: (addons-355098)   </features>
	I0408 22:45:44.039123   16992 main.go:141] libmachine: (addons-355098)   <cpu mode='host-passthrough'>
	I0408 22:45:44.039181   16992 main.go:141] libmachine: (addons-355098)   
	I0408 22:45:44.039203   16992 main.go:141] libmachine: (addons-355098)   </cpu>
	I0408 22:45:44.039214   16992 main.go:141] libmachine: (addons-355098)   <os>
	I0408 22:45:44.039225   16992 main.go:141] libmachine: (addons-355098)     <type>hvm</type>
	I0408 22:45:44.039234   16992 main.go:141] libmachine: (addons-355098)     <boot dev='cdrom'/>
	I0408 22:45:44.039243   16992 main.go:141] libmachine: (addons-355098)     <boot dev='hd'/>
	I0408 22:45:44.039252   16992 main.go:141] libmachine: (addons-355098)     <bootmenu enable='no'/>
	I0408 22:45:44.039261   16992 main.go:141] libmachine: (addons-355098)   </os>
	I0408 22:45:44.039269   16992 main.go:141] libmachine: (addons-355098)   <devices>
	I0408 22:45:44.039282   16992 main.go:141] libmachine: (addons-355098)     <disk type='file' device='cdrom'>
	I0408 22:45:44.039299   16992 main.go:141] libmachine: (addons-355098)       <source file='/home/jenkins/minikube-integration/20501-9125/.minikube/machines/addons-355098/boot2docker.iso'/>
	I0408 22:45:44.039309   16992 main.go:141] libmachine: (addons-355098)       <target dev='hdc' bus='scsi'/>
	I0408 22:45:44.039320   16992 main.go:141] libmachine: (addons-355098)       <readonly/>
	I0408 22:45:44.039329   16992 main.go:141] libmachine: (addons-355098)     </disk>
	I0408 22:45:44.039346   16992 main.go:141] libmachine: (addons-355098)     <disk type='file' device='disk'>
	I0408 22:45:44.039361   16992 main.go:141] libmachine: (addons-355098)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0408 22:45:44.039376   16992 main.go:141] libmachine: (addons-355098)       <source file='/home/jenkins/minikube-integration/20501-9125/.minikube/machines/addons-355098/addons-355098.rawdisk'/>
	I0408 22:45:44.039387   16992 main.go:141] libmachine: (addons-355098)       <target dev='hda' bus='virtio'/>
	I0408 22:45:44.039397   16992 main.go:141] libmachine: (addons-355098)     </disk>
	I0408 22:45:44.039407   16992 main.go:141] libmachine: (addons-355098)     <interface type='network'>
	I0408 22:45:44.039429   16992 main.go:141] libmachine: (addons-355098)       <source network='mk-addons-355098'/>
	I0408 22:45:44.039442   16992 main.go:141] libmachine: (addons-355098)       <model type='virtio'/>
	I0408 22:45:44.039455   16992 main.go:141] libmachine: (addons-355098)     </interface>
	I0408 22:45:44.039471   16992 main.go:141] libmachine: (addons-355098)     <interface type='network'>
	I0408 22:45:44.039482   16992 main.go:141] libmachine: (addons-355098)       <source network='default'/>
	I0408 22:45:44.039492   16992 main.go:141] libmachine: (addons-355098)       <model type='virtio'/>
	I0408 22:45:44.039503   16992 main.go:141] libmachine: (addons-355098)     </interface>
	I0408 22:45:44.039517   16992 main.go:141] libmachine: (addons-355098)     <serial type='pty'>
	I0408 22:45:44.039529   16992 main.go:141] libmachine: (addons-355098)       <target port='0'/>
	I0408 22:45:44.039538   16992 main.go:141] libmachine: (addons-355098)     </serial>
	I0408 22:45:44.039546   16992 main.go:141] libmachine: (addons-355098)     <console type='pty'>
	I0408 22:45:44.039560   16992 main.go:141] libmachine: (addons-355098)       <target type='serial' port='0'/>
	I0408 22:45:44.039571   16992 main.go:141] libmachine: (addons-355098)     </console>
	I0408 22:45:44.039580   16992 main.go:141] libmachine: (addons-355098)     <rng model='virtio'>
	I0408 22:45:44.039597   16992 main.go:141] libmachine: (addons-355098)       <backend model='random'>/dev/random</backend>
	I0408 22:45:44.039609   16992 main.go:141] libmachine: (addons-355098)     </rng>
	I0408 22:45:44.039616   16992 main.go:141] libmachine: (addons-355098)     
	I0408 22:45:44.039623   16992 main.go:141] libmachine: (addons-355098)     
	I0408 22:45:44.039627   16992 main.go:141] libmachine: (addons-355098)   </devices>
	I0408 22:45:44.039631   16992 main.go:141] libmachine: (addons-355098) </domain>
	I0408 22:45:44.039640   16992 main.go:141] libmachine: (addons-355098) 
	I0408 22:45:44.045871   16992 main.go:141] libmachine: (addons-355098) DBG | domain addons-355098 has defined MAC address 52:54:00:b2:c5:82 in network default
	I0408 22:45:44.046376   16992 main.go:141] libmachine: (addons-355098) starting domain...
	I0408 22:45:44.046389   16992 main.go:141] libmachine: (addons-355098) ensuring networks are active...
	I0408 22:45:44.046400   16992 main.go:141] libmachine: (addons-355098) DBG | domain addons-355098 has defined MAC address 52:54:00:ca:e2:e3 in network mk-addons-355098
	I0408 22:45:44.047064   16992 main.go:141] libmachine: (addons-355098) Ensuring network default is active
	I0408 22:45:44.047340   16992 main.go:141] libmachine: (addons-355098) Ensuring network mk-addons-355098 is active
	I0408 22:45:44.048657   16992 main.go:141] libmachine: (addons-355098) getting domain XML...
	I0408 22:45:44.049283   16992 main.go:141] libmachine: (addons-355098) creating domain...
	I0408 22:45:45.412859   16992 main.go:141] libmachine: (addons-355098) waiting for IP...
	I0408 22:45:45.413608   16992 main.go:141] libmachine: (addons-355098) DBG | domain addons-355098 has defined MAC address 52:54:00:ca:e2:e3 in network mk-addons-355098
	I0408 22:45:45.414029   16992 main.go:141] libmachine: (addons-355098) DBG | unable to find current IP address of domain addons-355098 in network mk-addons-355098
	I0408 22:45:45.414146   16992 main.go:141] libmachine: (addons-355098) DBG | I0408 22:45:45.414055   17014 retry.go:31] will retry after 259.966283ms: waiting for domain to come up
	I0408 22:45:45.675754   16992 main.go:141] libmachine: (addons-355098) DBG | domain addons-355098 has defined MAC address 52:54:00:ca:e2:e3 in network mk-addons-355098
	I0408 22:45:45.676278   16992 main.go:141] libmachine: (addons-355098) DBG | unable to find current IP address of domain addons-355098 in network mk-addons-355098
	I0408 22:45:45.676303   16992 main.go:141] libmachine: (addons-355098) DBG | I0408 22:45:45.676215   17014 retry.go:31] will retry after 339.919022ms: waiting for domain to come up
	I0408 22:45:46.017954   16992 main.go:141] libmachine: (addons-355098) DBG | domain addons-355098 has defined MAC address 52:54:00:ca:e2:e3 in network mk-addons-355098
	I0408 22:45:46.018442   16992 main.go:141] libmachine: (addons-355098) DBG | unable to find current IP address of domain addons-355098 in network mk-addons-355098
	I0408 22:45:46.018495   16992 main.go:141] libmachine: (addons-355098) DBG | I0408 22:45:46.018419   17014 retry.go:31] will retry after 443.835831ms: waiting for domain to come up
	I0408 22:45:46.464014   16992 main.go:141] libmachine: (addons-355098) DBG | domain addons-355098 has defined MAC address 52:54:00:ca:e2:e3 in network mk-addons-355098
	I0408 22:45:46.464460   16992 main.go:141] libmachine: (addons-355098) DBG | unable to find current IP address of domain addons-355098 in network mk-addons-355098
	I0408 22:45:46.464484   16992 main.go:141] libmachine: (addons-355098) DBG | I0408 22:45:46.464431   17014 retry.go:31] will retry after 383.333119ms: waiting for domain to come up
	I0408 22:45:46.849021   16992 main.go:141] libmachine: (addons-355098) DBG | domain addons-355098 has defined MAC address 52:54:00:ca:e2:e3 in network mk-addons-355098
	I0408 22:45:46.849445   16992 main.go:141] libmachine: (addons-355098) DBG | unable to find current IP address of domain addons-355098 in network mk-addons-355098
	I0408 22:45:46.849473   16992 main.go:141] libmachine: (addons-355098) DBG | I0408 22:45:46.849416   17014 retry.go:31] will retry after 602.996761ms: waiting for domain to come up
	I0408 22:45:47.454228   16992 main.go:141] libmachine: (addons-355098) DBG | domain addons-355098 has defined MAC address 52:54:00:ca:e2:e3 in network mk-addons-355098
	I0408 22:45:47.454636   16992 main.go:141] libmachine: (addons-355098) DBG | unable to find current IP address of domain addons-355098 in network mk-addons-355098
	I0408 22:45:47.454650   16992 main.go:141] libmachine: (addons-355098) DBG | I0408 22:45:47.454609   17014 retry.go:31] will retry after 804.280234ms: waiting for domain to come up
	I0408 22:45:48.260175   16992 main.go:141] libmachine: (addons-355098) DBG | domain addons-355098 has defined MAC address 52:54:00:ca:e2:e3 in network mk-addons-355098
	I0408 22:45:48.260617   16992 main.go:141] libmachine: (addons-355098) DBG | unable to find current IP address of domain addons-355098 in network mk-addons-355098
	I0408 22:45:48.260687   16992 main.go:141] libmachine: (addons-355098) DBG | I0408 22:45:48.260602   17014 retry.go:31] will retry after 1.063201708s: waiting for domain to come up
	I0408 22:45:49.324908   16992 main.go:141] libmachine: (addons-355098) DBG | domain addons-355098 has defined MAC address 52:54:00:ca:e2:e3 in network mk-addons-355098
	I0408 22:45:49.325424   16992 main.go:141] libmachine: (addons-355098) DBG | unable to find current IP address of domain addons-355098 in network mk-addons-355098
	I0408 22:45:49.325450   16992 main.go:141] libmachine: (addons-355098) DBG | I0408 22:45:49.325383   17014 retry.go:31] will retry after 918.968645ms: waiting for domain to come up
	I0408 22:45:50.245607   16992 main.go:141] libmachine: (addons-355098) DBG | domain addons-355098 has defined MAC address 52:54:00:ca:e2:e3 in network mk-addons-355098
	I0408 22:45:50.246073   16992 main.go:141] libmachine: (addons-355098) DBG | unable to find current IP address of domain addons-355098 in network mk-addons-355098
	I0408 22:45:50.246092   16992 main.go:141] libmachine: (addons-355098) DBG | I0408 22:45:50.246044   17014 retry.go:31] will retry after 1.723068988s: waiting for domain to come up
	I0408 22:45:51.970293   16992 main.go:141] libmachine: (addons-355098) DBG | domain addons-355098 has defined MAC address 52:54:00:ca:e2:e3 in network mk-addons-355098
	I0408 22:45:51.970749   16992 main.go:141] libmachine: (addons-355098) DBG | unable to find current IP address of domain addons-355098 in network mk-addons-355098
	I0408 22:45:51.970774   16992 main.go:141] libmachine: (addons-355098) DBG | I0408 22:45:51.970712   17014 retry.go:31] will retry after 1.465146862s: waiting for domain to come up
	I0408 22:45:53.437682   16992 main.go:141] libmachine: (addons-355098) DBG | domain addons-355098 has defined MAC address 52:54:00:ca:e2:e3 in network mk-addons-355098
	I0408 22:45:53.438111   16992 main.go:141] libmachine: (addons-355098) DBG | unable to find current IP address of domain addons-355098 in network mk-addons-355098
	I0408 22:45:53.438148   16992 main.go:141] libmachine: (addons-355098) DBG | I0408 22:45:53.438100   17014 retry.go:31] will retry after 1.84855198s: waiting for domain to come up
	I0408 22:45:55.289180   16992 main.go:141] libmachine: (addons-355098) DBG | domain addons-355098 has defined MAC address 52:54:00:ca:e2:e3 in network mk-addons-355098
	I0408 22:45:55.289527   16992 main.go:141] libmachine: (addons-355098) DBG | unable to find current IP address of domain addons-355098 in network mk-addons-355098
	I0408 22:45:55.289550   16992 main.go:141] libmachine: (addons-355098) DBG | I0408 22:45:55.289510   17014 retry.go:31] will retry after 2.640295151s: waiting for domain to come up
	I0408 22:45:57.933168   16992 main.go:141] libmachine: (addons-355098) DBG | domain addons-355098 has defined MAC address 52:54:00:ca:e2:e3 in network mk-addons-355098
	I0408 22:45:57.933487   16992 main.go:141] libmachine: (addons-355098) DBG | unable to find current IP address of domain addons-355098 in network mk-addons-355098
	I0408 22:45:57.933547   16992 main.go:141] libmachine: (addons-355098) DBG | I0408 22:45:57.933455   17014 retry.go:31] will retry after 4.392644163s: waiting for domain to come up
	I0408 22:46:02.328797   16992 main.go:141] libmachine: (addons-355098) DBG | domain addons-355098 has defined MAC address 52:54:00:ca:e2:e3 in network mk-addons-355098
	I0408 22:46:02.329232   16992 main.go:141] libmachine: (addons-355098) DBG | unable to find current IP address of domain addons-355098 in network mk-addons-355098
	I0408 22:46:02.329261   16992 main.go:141] libmachine: (addons-355098) DBG | I0408 22:46:02.329185   17014 retry.go:31] will retry after 4.511956687s: waiting for domain to come up
	I0408 22:46:06.845027   16992 main.go:141] libmachine: (addons-355098) DBG | domain addons-355098 has defined MAC address 52:54:00:ca:e2:e3 in network mk-addons-355098
	I0408 22:46:06.845439   16992 main.go:141] libmachine: (addons-355098) found domain IP: 192.168.39.199
	I0408 22:46:06.845471   16992 main.go:141] libmachine: (addons-355098) DBG | domain addons-355098 has current primary IP address 192.168.39.199 and MAC address 52:54:00:ca:e2:e3 in network mk-addons-355098
	I0408 22:46:06.845477   16992 main.go:141] libmachine: (addons-355098) reserving static IP address...
	I0408 22:46:06.845769   16992 main.go:141] libmachine: (addons-355098) DBG | unable to find host DHCP lease matching {name: "addons-355098", mac: "52:54:00:ca:e2:e3", ip: "192.168.39.199"} in network mk-addons-355098
	I0408 22:46:06.915087   16992 main.go:141] libmachine: (addons-355098) reserved static IP address 192.168.39.199 for domain addons-355098
	I0408 22:46:06.915110   16992 main.go:141] libmachine: (addons-355098) DBG | Getting to WaitForSSH function...
	I0408 22:46:06.915117   16992 main.go:141] libmachine: (addons-355098) waiting for SSH...
	I0408 22:46:06.917296   16992 main.go:141] libmachine: (addons-355098) DBG | domain addons-355098 has defined MAC address 52:54:00:ca:e2:e3 in network mk-addons-355098
	I0408 22:46:06.917644   16992 main.go:141] libmachine: (addons-355098) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:e2:e3", ip: ""} in network mk-addons-355098: {Iface:virbr1 ExpiryTime:2025-04-08 23:45:58 +0000 UTC Type:0 Mac:52:54:00:ca:e2:e3 Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:minikube Clientid:01:52:54:00:ca:e2:e3}
	I0408 22:46:06.917666   16992 main.go:141] libmachine: (addons-355098) DBG | domain addons-355098 has defined IP address 192.168.39.199 and MAC address 52:54:00:ca:e2:e3 in network mk-addons-355098
	I0408 22:46:06.917802   16992 main.go:141] libmachine: (addons-355098) DBG | Using SSH client type: external
	I0408 22:46:06.917838   16992 main.go:141] libmachine: (addons-355098) DBG | Using SSH private key: /home/jenkins/minikube-integration/20501-9125/.minikube/machines/addons-355098/id_rsa (-rw-------)
	I0408 22:46:06.917866   16992 main.go:141] libmachine: (addons-355098) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.199 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20501-9125/.minikube/machines/addons-355098/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0408 22:46:06.917881   16992 main.go:141] libmachine: (addons-355098) DBG | About to run SSH command:
	I0408 22:46:06.917891   16992 main.go:141] libmachine: (addons-355098) DBG | exit 0
	I0408 22:46:07.047494   16992 main.go:141] libmachine: (addons-355098) DBG | SSH cmd err, output: <nil>: 
	I0408 22:46:07.047750   16992 main.go:141] libmachine: (addons-355098) KVM machine creation complete
	I0408 22:46:07.048033   16992 main.go:141] libmachine: (addons-355098) Calling .GetConfigRaw
	I0408 22:46:07.048548   16992 main.go:141] libmachine: (addons-355098) Calling .DriverName
	I0408 22:46:07.048754   16992 main.go:141] libmachine: (addons-355098) Calling .DriverName
	I0408 22:46:07.048903   16992 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0408 22:46:07.048918   16992 main.go:141] libmachine: (addons-355098) Calling .GetState
	I0408 22:46:07.049982   16992 main.go:141] libmachine: Detecting operating system of created instance...
	I0408 22:46:07.049995   16992 main.go:141] libmachine: Waiting for SSH to be available...
	I0408 22:46:07.050015   16992 main.go:141] libmachine: Getting to WaitForSSH function...
	I0408 22:46:07.050027   16992 main.go:141] libmachine: (addons-355098) Calling .GetSSHHostname
	I0408 22:46:07.052192   16992 main.go:141] libmachine: (addons-355098) DBG | domain addons-355098 has defined MAC address 52:54:00:ca:e2:e3 in network mk-addons-355098
	I0408 22:46:07.052516   16992 main.go:141] libmachine: (addons-355098) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:e2:e3", ip: ""} in network mk-addons-355098: {Iface:virbr1 ExpiryTime:2025-04-08 23:45:58 +0000 UTC Type:0 Mac:52:54:00:ca:e2:e3 Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:addons-355098 Clientid:01:52:54:00:ca:e2:e3}
	I0408 22:46:07.052537   16992 main.go:141] libmachine: (addons-355098) DBG | domain addons-355098 has defined IP address 192.168.39.199 and MAC address 52:54:00:ca:e2:e3 in network mk-addons-355098
	I0408 22:46:07.052647   16992 main.go:141] libmachine: (addons-355098) Calling .GetSSHPort
	I0408 22:46:07.052798   16992 main.go:141] libmachine: (addons-355098) Calling .GetSSHKeyPath
	I0408 22:46:07.052943   16992 main.go:141] libmachine: (addons-355098) Calling .GetSSHKeyPath
	I0408 22:46:07.053044   16992 main.go:141] libmachine: (addons-355098) Calling .GetSSHUsername
	I0408 22:46:07.053197   16992 main.go:141] libmachine: Using SSH client type: native
	I0408 22:46:07.053417   16992 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.199 22 <nil> <nil>}
	I0408 22:46:07.053429   16992 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0408 22:46:07.150640   16992 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0408 22:46:07.150662   16992 main.go:141] libmachine: Detecting the provisioner...
	I0408 22:46:07.150676   16992 main.go:141] libmachine: (addons-355098) Calling .GetSSHHostname
	I0408 22:46:07.153133   16992 main.go:141] libmachine: (addons-355098) DBG | domain addons-355098 has defined MAC address 52:54:00:ca:e2:e3 in network mk-addons-355098
	I0408 22:46:07.153462   16992 main.go:141] libmachine: (addons-355098) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:e2:e3", ip: ""} in network mk-addons-355098: {Iface:virbr1 ExpiryTime:2025-04-08 23:45:58 +0000 UTC Type:0 Mac:52:54:00:ca:e2:e3 Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:addons-355098 Clientid:01:52:54:00:ca:e2:e3}
	I0408 22:46:07.153499   16992 main.go:141] libmachine: (addons-355098) DBG | domain addons-355098 has defined IP address 192.168.39.199 and MAC address 52:54:00:ca:e2:e3 in network mk-addons-355098
	I0408 22:46:07.153642   16992 main.go:141] libmachine: (addons-355098) Calling .GetSSHPort
	I0408 22:46:07.153812   16992 main.go:141] libmachine: (addons-355098) Calling .GetSSHKeyPath
	I0408 22:46:07.153940   16992 main.go:141] libmachine: (addons-355098) Calling .GetSSHKeyPath
	I0408 22:46:07.154059   16992 main.go:141] libmachine: (addons-355098) Calling .GetSSHUsername
	I0408 22:46:07.154219   16992 main.go:141] libmachine: Using SSH client type: native
	I0408 22:46:07.154408   16992 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.199 22 <nil> <nil>}
	I0408 22:46:07.154424   16992 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0408 22:46:07.251986   16992 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0408 22:46:07.252091   16992 main.go:141] libmachine: found compatible host: buildroot
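The provisioner detection above keys off the ID field of /etc/os-release fetched over SSH; ID=buildroot marks a minikube ISO guest. A minimal manual equivalent, assuming the SSH key path and user shown elsewhere in this log:

    ssh -i /home/jenkins/minikube-integration/20501-9125/.minikube/machines/addons-355098/id_rsa \
        docker@192.168.39.199 'grep ^ID= /etc/os-release'   # expected: ID=buildroot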
	I0408 22:46:07.252111   16992 main.go:141] libmachine: Provisioning with buildroot...
	I0408 22:46:07.252128   16992 main.go:141] libmachine: (addons-355098) Calling .GetMachineName
	I0408 22:46:07.252375   16992 buildroot.go:166] provisioning hostname "addons-355098"
	I0408 22:46:07.252398   16992 main.go:141] libmachine: (addons-355098) Calling .GetMachineName
	I0408 22:46:07.252568   16992 main.go:141] libmachine: (addons-355098) Calling .GetSSHHostname
	I0408 22:46:07.254747   16992 main.go:141] libmachine: (addons-355098) DBG | domain addons-355098 has defined MAC address 52:54:00:ca:e2:e3 in network mk-addons-355098
	I0408 22:46:07.255063   16992 main.go:141] libmachine: (addons-355098) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:e2:e3", ip: ""} in network mk-addons-355098: {Iface:virbr1 ExpiryTime:2025-04-08 23:45:58 +0000 UTC Type:0 Mac:52:54:00:ca:e2:e3 Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:addons-355098 Clientid:01:52:54:00:ca:e2:e3}
	I0408 22:46:07.255081   16992 main.go:141] libmachine: (addons-355098) DBG | domain addons-355098 has defined IP address 192.168.39.199 and MAC address 52:54:00:ca:e2:e3 in network mk-addons-355098
	I0408 22:46:07.255218   16992 main.go:141] libmachine: (addons-355098) Calling .GetSSHPort
	I0408 22:46:07.255375   16992 main.go:141] libmachine: (addons-355098) Calling .GetSSHKeyPath
	I0408 22:46:07.255503   16992 main.go:141] libmachine: (addons-355098) Calling .GetSSHKeyPath
	I0408 22:46:07.255619   16992 main.go:141] libmachine: (addons-355098) Calling .GetSSHUsername
	I0408 22:46:07.255737   16992 main.go:141] libmachine: Using SSH client type: native
	I0408 22:46:07.256008   16992 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.199 22 <nil> <nil>}
	I0408 22:46:07.256020   16992 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-355098 && echo "addons-355098" | sudo tee /etc/hostname
	I0408 22:46:07.369291   16992 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-355098
	
	I0408 22:46:07.369314   16992 main.go:141] libmachine: (addons-355098) Calling .GetSSHHostname
	I0408 22:46:07.371973   16992 main.go:141] libmachine: (addons-355098) DBG | domain addons-355098 has defined MAC address 52:54:00:ca:e2:e3 in network mk-addons-355098
	I0408 22:46:07.372260   16992 main.go:141] libmachine: (addons-355098) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:e2:e3", ip: ""} in network mk-addons-355098: {Iface:virbr1 ExpiryTime:2025-04-08 23:45:58 +0000 UTC Type:0 Mac:52:54:00:ca:e2:e3 Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:addons-355098 Clientid:01:52:54:00:ca:e2:e3}
	I0408 22:46:07.372285   16992 main.go:141] libmachine: (addons-355098) DBG | domain addons-355098 has defined IP address 192.168.39.199 and MAC address 52:54:00:ca:e2:e3 in network mk-addons-355098
	I0408 22:46:07.372428   16992 main.go:141] libmachine: (addons-355098) Calling .GetSSHPort
	I0408 22:46:07.372610   16992 main.go:141] libmachine: (addons-355098) Calling .GetSSHKeyPath
	I0408 22:46:07.372742   16992 main.go:141] libmachine: (addons-355098) Calling .GetSSHKeyPath
	I0408 22:46:07.372903   16992 main.go:141] libmachine: (addons-355098) Calling .GetSSHUsername
	I0408 22:46:07.373081   16992 main.go:141] libmachine: Using SSH client type: native
	I0408 22:46:07.373279   16992 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.199 22 <nil> <nil>}
	I0408 22:46:07.373294   16992 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-355098' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-355098/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-355098' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0408 22:46:07.479892   16992 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0408 22:46:07.479925   16992 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20501-9125/.minikube CaCertPath:/home/jenkins/minikube-integration/20501-9125/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20501-9125/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20501-9125/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20501-9125/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20501-9125/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20501-9125/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20501-9125/.minikube}
	I0408 22:46:07.479952   16992 buildroot.go:174] setting up certificates
	I0408 22:46:07.479969   16992 provision.go:84] configureAuth start
	I0408 22:46:07.479980   16992 main.go:141] libmachine: (addons-355098) Calling .GetMachineName
	I0408 22:46:07.480247   16992 main.go:141] libmachine: (addons-355098) Calling .GetIP
	I0408 22:46:07.482761   16992 main.go:141] libmachine: (addons-355098) DBG | domain addons-355098 has defined MAC address 52:54:00:ca:e2:e3 in network mk-addons-355098
	I0408 22:46:07.483081   16992 main.go:141] libmachine: (addons-355098) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:e2:e3", ip: ""} in network mk-addons-355098: {Iface:virbr1 ExpiryTime:2025-04-08 23:45:58 +0000 UTC Type:0 Mac:52:54:00:ca:e2:e3 Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:addons-355098 Clientid:01:52:54:00:ca:e2:e3}
	I0408 22:46:07.483104   16992 main.go:141] libmachine: (addons-355098) DBG | domain addons-355098 has defined IP address 192.168.39.199 and MAC address 52:54:00:ca:e2:e3 in network mk-addons-355098
	I0408 22:46:07.483284   16992 main.go:141] libmachine: (addons-355098) Calling .GetSSHHostname
	I0408 22:46:07.485343   16992 main.go:141] libmachine: (addons-355098) DBG | domain addons-355098 has defined MAC address 52:54:00:ca:e2:e3 in network mk-addons-355098
	I0408 22:46:07.485642   16992 main.go:141] libmachine: (addons-355098) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:e2:e3", ip: ""} in network mk-addons-355098: {Iface:virbr1 ExpiryTime:2025-04-08 23:45:58 +0000 UTC Type:0 Mac:52:54:00:ca:e2:e3 Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:addons-355098 Clientid:01:52:54:00:ca:e2:e3}
	I0408 22:46:07.485667   16992 main.go:141] libmachine: (addons-355098) DBG | domain addons-355098 has defined IP address 192.168.39.199 and MAC address 52:54:00:ca:e2:e3 in network mk-addons-355098
	I0408 22:46:07.485765   16992 provision.go:143] copyHostCerts
	I0408 22:46:07.485826   16992 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20501-9125/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20501-9125/.minikube/ca.pem (1082 bytes)
	I0408 22:46:07.485964   16992 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20501-9125/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20501-9125/.minikube/cert.pem (1123 bytes)
	I0408 22:46:07.486043   16992 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20501-9125/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20501-9125/.minikube/key.pem (1675 bytes)
	I0408 22:46:07.486092   16992 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20501-9125/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20501-9125/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20501-9125/.minikube/certs/ca-key.pem org=jenkins.addons-355098 san=[127.0.0.1 192.168.39.199 addons-355098 localhost minikube]
	I0408 22:46:07.705721   16992 provision.go:177] copyRemoteCerts
	I0408 22:46:07.705776   16992 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0408 22:46:07.705800   16992 main.go:141] libmachine: (addons-355098) Calling .GetSSHHostname
	I0408 22:46:07.708237   16992 main.go:141] libmachine: (addons-355098) DBG | domain addons-355098 has defined MAC address 52:54:00:ca:e2:e3 in network mk-addons-355098
	I0408 22:46:07.708566   16992 main.go:141] libmachine: (addons-355098) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:e2:e3", ip: ""} in network mk-addons-355098: {Iface:virbr1 ExpiryTime:2025-04-08 23:45:58 +0000 UTC Type:0 Mac:52:54:00:ca:e2:e3 Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:addons-355098 Clientid:01:52:54:00:ca:e2:e3}
	I0408 22:46:07.708591   16992 main.go:141] libmachine: (addons-355098) DBG | domain addons-355098 has defined IP address 192.168.39.199 and MAC address 52:54:00:ca:e2:e3 in network mk-addons-355098
	I0408 22:46:07.708741   16992 main.go:141] libmachine: (addons-355098) Calling .GetSSHPort
	I0408 22:46:07.708939   16992 main.go:141] libmachine: (addons-355098) Calling .GetSSHKeyPath
	I0408 22:46:07.709068   16992 main.go:141] libmachine: (addons-355098) Calling .GetSSHUsername
	I0408 22:46:07.709224   16992 sshutil.go:53] new ssh client: &{IP:192.168.39.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20501-9125/.minikube/machines/addons-355098/id_rsa Username:docker}
	I0408 22:46:07.789689   16992 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20501-9125/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0408 22:46:07.811283   16992 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20501-9125/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0408 22:46:07.832170   16992 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20501-9125/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0408 22:46:07.852780   16992 provision.go:87] duration metric: took 372.798718ms to configureAuth
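configureAuth above copies the host CA material and mints a server certificate whose SANs cover the VM IP, the hostname, localhost and minikube. A rough openssl sketch of that step (illustrative only; minikube generates these certificates in Go, and the file names here are placeholders):

    openssl genrsa -out server-key.pem 2048
    openssl req -new -key server-key.pem -subj "/O=jenkins.addons-355098" -out server.csr
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -days 365 \
        -extfile <(printf "subjectAltName=DNS:addons-355098,DNS:localhost,DNS:minikube,IP:127.0.0.1,IP:192.168.39.199") \
        -out server.pem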
	I0408 22:46:07.852803   16992 buildroot.go:189] setting minikube options for container-runtime
	I0408 22:46:07.852960   16992 config.go:182] Loaded profile config "addons-355098": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0408 22:46:07.853032   16992 main.go:141] libmachine: (addons-355098) Calling .GetSSHHostname
	I0408 22:46:07.856783   16992 main.go:141] libmachine: (addons-355098) DBG | domain addons-355098 has defined MAC address 52:54:00:ca:e2:e3 in network mk-addons-355098
	I0408 22:46:07.857209   16992 main.go:141] libmachine: (addons-355098) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:e2:e3", ip: ""} in network mk-addons-355098: {Iface:virbr1 ExpiryTime:2025-04-08 23:45:58 +0000 UTC Type:0 Mac:52:54:00:ca:e2:e3 Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:addons-355098 Clientid:01:52:54:00:ca:e2:e3}
	I0408 22:46:07.857241   16992 main.go:141] libmachine: (addons-355098) DBG | domain addons-355098 has defined IP address 192.168.39.199 and MAC address 52:54:00:ca:e2:e3 in network mk-addons-355098
	I0408 22:46:07.857376   16992 main.go:141] libmachine: (addons-355098) Calling .GetSSHPort
	I0408 22:46:07.857553   16992 main.go:141] libmachine: (addons-355098) Calling .GetSSHKeyPath
	I0408 22:46:07.857706   16992 main.go:141] libmachine: (addons-355098) Calling .GetSSHKeyPath
	I0408 22:46:07.857818   16992 main.go:141] libmachine: (addons-355098) Calling .GetSSHUsername
	I0408 22:46:07.857964   16992 main.go:141] libmachine: Using SSH client type: native
	I0408 22:46:07.858153   16992 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.199 22 <nil> <nil>}
	I0408 22:46:07.858181   16992 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0408 22:46:08.070830   16992 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0408 22:46:08.070859   16992 main.go:141] libmachine: Checking connection to Docker...
	I0408 22:46:08.070870   16992 main.go:141] libmachine: (addons-355098) Calling .GetURL
	I0408 22:46:08.072055   16992 main.go:141] libmachine: (addons-355098) DBG | using libvirt version 6000000
	I0408 22:46:08.074035   16992 main.go:141] libmachine: (addons-355098) DBG | domain addons-355098 has defined MAC address 52:54:00:ca:e2:e3 in network mk-addons-355098
	I0408 22:46:08.074355   16992 main.go:141] libmachine: (addons-355098) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:e2:e3", ip: ""} in network mk-addons-355098: {Iface:virbr1 ExpiryTime:2025-04-08 23:45:58 +0000 UTC Type:0 Mac:52:54:00:ca:e2:e3 Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:addons-355098 Clientid:01:52:54:00:ca:e2:e3}
	I0408 22:46:08.074383   16992 main.go:141] libmachine: (addons-355098) DBG | domain addons-355098 has defined IP address 192.168.39.199 and MAC address 52:54:00:ca:e2:e3 in network mk-addons-355098
	I0408 22:46:08.074528   16992 main.go:141] libmachine: Docker is up and running!
	I0408 22:46:08.074541   16992 main.go:141] libmachine: Reticulating splines...
	I0408 22:46:08.074548   16992 client.go:171] duration metric: took 24.834775345s to LocalClient.Create
	I0408 22:46:08.074568   16992 start.go:167] duration metric: took 24.834827379s to libmachine.API.Create "addons-355098"
	I0408 22:46:08.074577   16992 start.go:293] postStartSetup for "addons-355098" (driver="kvm2")
	I0408 22:46:08.074584   16992 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0408 22:46:08.074600   16992 main.go:141] libmachine: (addons-355098) Calling .DriverName
	I0408 22:46:08.074792   16992 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0408 22:46:08.074812   16992 main.go:141] libmachine: (addons-355098) Calling .GetSSHHostname
	I0408 22:46:08.076929   16992 main.go:141] libmachine: (addons-355098) DBG | domain addons-355098 has defined MAC address 52:54:00:ca:e2:e3 in network mk-addons-355098
	I0408 22:46:08.077300   16992 main.go:141] libmachine: (addons-355098) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:e2:e3", ip: ""} in network mk-addons-355098: {Iface:virbr1 ExpiryTime:2025-04-08 23:45:58 +0000 UTC Type:0 Mac:52:54:00:ca:e2:e3 Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:addons-355098 Clientid:01:52:54:00:ca:e2:e3}
	I0408 22:46:08.077325   16992 main.go:141] libmachine: (addons-355098) DBG | domain addons-355098 has defined IP address 192.168.39.199 and MAC address 52:54:00:ca:e2:e3 in network mk-addons-355098
	I0408 22:46:08.077496   16992 main.go:141] libmachine: (addons-355098) Calling .GetSSHPort
	I0408 22:46:08.077654   16992 main.go:141] libmachine: (addons-355098) Calling .GetSSHKeyPath
	I0408 22:46:08.077891   16992 main.go:141] libmachine: (addons-355098) Calling .GetSSHUsername
	I0408 22:46:08.078018   16992 sshutil.go:53] new ssh client: &{IP:192.168.39.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20501-9125/.minikube/machines/addons-355098/id_rsa Username:docker}
	I0408 22:46:08.157888   16992 ssh_runner.go:195] Run: cat /etc/os-release
	I0408 22:46:08.161624   16992 info.go:137] Remote host: Buildroot 2023.02.9
	I0408 22:46:08.161648   16992 filesync.go:126] Scanning /home/jenkins/minikube-integration/20501-9125/.minikube/addons for local assets ...
	I0408 22:46:08.161712   16992 filesync.go:126] Scanning /home/jenkins/minikube-integration/20501-9125/.minikube/files for local assets ...
	I0408 22:46:08.161743   16992 start.go:296] duration metric: took 87.160425ms for postStartSetup
	I0408 22:46:08.161789   16992 main.go:141] libmachine: (addons-355098) Calling .GetConfigRaw
	I0408 22:46:08.162295   16992 main.go:141] libmachine: (addons-355098) Calling .GetIP
	I0408 22:46:08.164715   16992 main.go:141] libmachine: (addons-355098) DBG | domain addons-355098 has defined MAC address 52:54:00:ca:e2:e3 in network mk-addons-355098
	I0408 22:46:08.165035   16992 main.go:141] libmachine: (addons-355098) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:e2:e3", ip: ""} in network mk-addons-355098: {Iface:virbr1 ExpiryTime:2025-04-08 23:45:58 +0000 UTC Type:0 Mac:52:54:00:ca:e2:e3 Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:addons-355098 Clientid:01:52:54:00:ca:e2:e3}
	I0408 22:46:08.165063   16992 main.go:141] libmachine: (addons-355098) DBG | domain addons-355098 has defined IP address 192.168.39.199 and MAC address 52:54:00:ca:e2:e3 in network mk-addons-355098
	I0408 22:46:08.165254   16992 profile.go:143] Saving config to /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/addons-355098/config.json ...
	I0408 22:46:08.165451   16992 start.go:128] duration metric: took 24.943262211s to createHost
	I0408 22:46:08.165478   16992 main.go:141] libmachine: (addons-355098) Calling .GetSSHHostname
	I0408 22:46:08.167458   16992 main.go:141] libmachine: (addons-355098) DBG | domain addons-355098 has defined MAC address 52:54:00:ca:e2:e3 in network mk-addons-355098
	I0408 22:46:08.167701   16992 main.go:141] libmachine: (addons-355098) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:e2:e3", ip: ""} in network mk-addons-355098: {Iface:virbr1 ExpiryTime:2025-04-08 23:45:58 +0000 UTC Type:0 Mac:52:54:00:ca:e2:e3 Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:addons-355098 Clientid:01:52:54:00:ca:e2:e3}
	I0408 22:46:08.167728   16992 main.go:141] libmachine: (addons-355098) DBG | domain addons-355098 has defined IP address 192.168.39.199 and MAC address 52:54:00:ca:e2:e3 in network mk-addons-355098
	I0408 22:46:08.167833   16992 main.go:141] libmachine: (addons-355098) Calling .GetSSHPort
	I0408 22:46:08.168070   16992 main.go:141] libmachine: (addons-355098) Calling .GetSSHKeyPath
	I0408 22:46:08.168313   16992 main.go:141] libmachine: (addons-355098) Calling .GetSSHKeyPath
	I0408 22:46:08.168440   16992 main.go:141] libmachine: (addons-355098) Calling .GetSSHUsername
	I0408 22:46:08.168613   16992 main.go:141] libmachine: Using SSH client type: native
	I0408 22:46:08.168820   16992 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.199 22 <nil> <nil>}
	I0408 22:46:08.168829   16992 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0408 22:46:08.267895   16992 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744152368.245196259
	
	I0408 22:46:08.267920   16992 fix.go:216] guest clock: 1744152368.245196259
	I0408 22:46:08.267928   16992 fix.go:229] Guest: 2025-04-08 22:46:08.245196259 +0000 UTC Remote: 2025-04-08 22:46:08.165464431 +0000 UTC m=+25.045320325 (delta=79.731828ms)
	I0408 22:46:08.267945   16992 fix.go:200] guest clock delta is within tolerance: 79.731828ms
	I0408 22:46:08.267950   16992 start.go:83] releasing machines lock for "addons-355098", held for 25.045849978s
	I0408 22:46:08.267983   16992 main.go:141] libmachine: (addons-355098) Calling .DriverName
	I0408 22:46:08.268246   16992 main.go:141] libmachine: (addons-355098) Calling .GetIP
	I0408 22:46:08.270885   16992 main.go:141] libmachine: (addons-355098) DBG | domain addons-355098 has defined MAC address 52:54:00:ca:e2:e3 in network mk-addons-355098
	I0408 22:46:08.271206   16992 main.go:141] libmachine: (addons-355098) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:e2:e3", ip: ""} in network mk-addons-355098: {Iface:virbr1 ExpiryTime:2025-04-08 23:45:58 +0000 UTC Type:0 Mac:52:54:00:ca:e2:e3 Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:addons-355098 Clientid:01:52:54:00:ca:e2:e3}
	I0408 22:46:08.271239   16992 main.go:141] libmachine: (addons-355098) DBG | domain addons-355098 has defined IP address 192.168.39.199 and MAC address 52:54:00:ca:e2:e3 in network mk-addons-355098
	I0408 22:46:08.271390   16992 main.go:141] libmachine: (addons-355098) Calling .DriverName
	I0408 22:46:08.271806   16992 main.go:141] libmachine: (addons-355098) Calling .DriverName
	I0408 22:46:08.271986   16992 main.go:141] libmachine: (addons-355098) Calling .DriverName
	I0408 22:46:08.272115   16992 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0408 22:46:08.272157   16992 main.go:141] libmachine: (addons-355098) Calling .GetSSHHostname
	I0408 22:46:08.272196   16992 ssh_runner.go:195] Run: cat /version.json
	I0408 22:46:08.272214   16992 main.go:141] libmachine: (addons-355098) Calling .GetSSHHostname
	I0408 22:46:08.274755   16992 main.go:141] libmachine: (addons-355098) DBG | domain addons-355098 has defined MAC address 52:54:00:ca:e2:e3 in network mk-addons-355098
	I0408 22:46:08.275040   16992 main.go:141] libmachine: (addons-355098) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:e2:e3", ip: ""} in network mk-addons-355098: {Iface:virbr1 ExpiryTime:2025-04-08 23:45:58 +0000 UTC Type:0 Mac:52:54:00:ca:e2:e3 Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:addons-355098 Clientid:01:52:54:00:ca:e2:e3}
	I0408 22:46:08.275082   16992 main.go:141] libmachine: (addons-355098) DBG | domain addons-355098 has defined IP address 192.168.39.199 and MAC address 52:54:00:ca:e2:e3 in network mk-addons-355098
	I0408 22:46:08.275113   16992 main.go:141] libmachine: (addons-355098) DBG | domain addons-355098 has defined MAC address 52:54:00:ca:e2:e3 in network mk-addons-355098
	I0408 22:46:08.275224   16992 main.go:141] libmachine: (addons-355098) Calling .GetSSHPort
	I0408 22:46:08.275357   16992 main.go:141] libmachine: (addons-355098) Calling .GetSSHKeyPath
	I0408 22:46:08.275466   16992 main.go:141] libmachine: (addons-355098) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:e2:e3", ip: ""} in network mk-addons-355098: {Iface:virbr1 ExpiryTime:2025-04-08 23:45:58 +0000 UTC Type:0 Mac:52:54:00:ca:e2:e3 Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:addons-355098 Clientid:01:52:54:00:ca:e2:e3}
	I0408 22:46:08.275475   16992 main.go:141] libmachine: (addons-355098) Calling .GetSSHUsername
	I0408 22:46:08.275491   16992 main.go:141] libmachine: (addons-355098) DBG | domain addons-355098 has defined IP address 192.168.39.199 and MAC address 52:54:00:ca:e2:e3 in network mk-addons-355098
	I0408 22:46:08.275587   16992 sshutil.go:53] new ssh client: &{IP:192.168.39.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20501-9125/.minikube/machines/addons-355098/id_rsa Username:docker}
	I0408 22:46:08.275648   16992 main.go:141] libmachine: (addons-355098) Calling .GetSSHPort
	I0408 22:46:08.275804   16992 main.go:141] libmachine: (addons-355098) Calling .GetSSHKeyPath
	I0408 22:46:08.275949   16992 main.go:141] libmachine: (addons-355098) Calling .GetSSHUsername
	I0408 22:46:08.276081   16992 sshutil.go:53] new ssh client: &{IP:192.168.39.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20501-9125/.minikube/machines/addons-355098/id_rsa Username:docker}
	I0408 22:46:08.348203   16992 ssh_runner.go:195] Run: systemctl --version
	I0408 22:46:08.382505   16992 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0408 22:46:08.535198   16992 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0408 22:46:08.540544   16992 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0408 22:46:08.540590   16992 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0408 22:46:08.555584   16992 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0408 22:46:08.555604   16992 start.go:495] detecting cgroup driver to use...
	I0408 22:46:08.555652   16992 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0408 22:46:08.570568   16992 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0408 22:46:08.582306   16992 docker.go:217] disabling cri-docker service (if available) ...
	I0408 22:46:08.582347   16992 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0408 22:46:08.594411   16992 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0408 22:46:08.606740   16992 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0408 22:46:08.714153   16992 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0408 22:46:08.850162   16992 docker.go:233] disabling docker service ...
	I0408 22:46:08.850218   16992 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0408 22:46:08.863926   16992 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0408 22:46:08.876144   16992 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0408 22:46:09.002342   16992 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0408 22:46:09.116451   16992 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0408 22:46:09.129927   16992 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0408 22:46:09.147351   16992 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0408 22:46:09.147416   16992 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 22:46:09.157051   16992 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0408 22:46:09.157103   16992 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 22:46:09.166265   16992 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 22:46:09.175080   16992 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 22:46:09.184072   16992 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0408 22:46:09.193642   16992 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 22:46:09.202775   16992 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 22:46:09.217896   16992 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
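The sed sequence above, together with the crictl.yaml written at 22:46:09.129927, is what switches the guest onto CRI-O with the cgroupfs driver. A hedged reconstruction of the keys that should now be present in /etc/crio/crio.conf.d/02-crio.conf (exact file contents may differ):

    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|default_sysctls|ip_unprivileged_port_start' \
        /etc/crio/crio.conf.d/02-crio.conf
    # pause_image = "registry.k8s.io/pause:3.10"
    # cgroup_manager = "cgroupfs"
    # conmon_cgroup = "pod"
    # default_sysctls = [
    #   "net.ipv4.ip_unprivileged_port_start=0",
    # ]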
	I0408 22:46:09.226901   16992 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0408 22:46:09.235086   16992 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0408 22:46:09.235123   16992 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0408 22:46:09.246052   16992 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
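The sysctl probe fails until br_netfilter is loaded, so the module is loaded explicitly and IPv4 forwarding is switched on before CRI-O is restarted. Equivalent manual steps on the guest (illustrative):

    sudo modprobe br_netfilter
    sudo sysctl net.bridge.bridge-nf-call-iptables        # readable once the module is loaded
    echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward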
	I0408 22:46:09.254192   16992 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 22:46:09.367975   16992 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0408 22:46:09.452583   16992 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0408 22:46:09.452694   16992 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0408 22:46:09.457412   16992 start.go:563] Will wait 60s for crictl version
	I0408 22:46:09.457465   16992 ssh_runner.go:195] Run: which crictl
	I0408 22:46:09.460789   16992 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0408 22:46:09.496225   16992 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0408 22:46:09.496341   16992 ssh_runner.go:195] Run: crio --version
	I0408 22:46:09.522507   16992 ssh_runner.go:195] Run: crio --version
	I0408 22:46:09.550926   16992 out.go:177] * Preparing Kubernetes v1.32.2 on CRI-O 1.29.1 ...
	I0408 22:46:09.552020   16992 main.go:141] libmachine: (addons-355098) Calling .GetIP
	I0408 22:46:09.554683   16992 main.go:141] libmachine: (addons-355098) DBG | domain addons-355098 has defined MAC address 52:54:00:ca:e2:e3 in network mk-addons-355098
	I0408 22:46:09.555016   16992 main.go:141] libmachine: (addons-355098) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:e2:e3", ip: ""} in network mk-addons-355098: {Iface:virbr1 ExpiryTime:2025-04-08 23:45:58 +0000 UTC Type:0 Mac:52:54:00:ca:e2:e3 Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:addons-355098 Clientid:01:52:54:00:ca:e2:e3}
	I0408 22:46:09.555037   16992 main.go:141] libmachine: (addons-355098) DBG | domain addons-355098 has defined IP address 192.168.39.199 and MAC address 52:54:00:ca:e2:e3 in network mk-addons-355098
	I0408 22:46:09.555331   16992 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0408 22:46:09.558875   16992 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0408 22:46:09.569902   16992 kubeadm.go:883] updating cluster {Name:addons-355098 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:addons-355098 Na
mespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.199 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryM
irror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0408 22:46:09.570012   16992 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0408 22:46:09.570066   16992 ssh_runner.go:195] Run: sudo crictl images --output json
	I0408 22:46:09.599447   16992 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.2". assuming images are not preloaded.
	I0408 22:46:09.599507   16992 ssh_runner.go:195] Run: which lz4
	I0408 22:46:09.602880   16992 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0408 22:46:09.606502   16992 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0408 22:46:09.606524   16992 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20501-9125/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (399124012 bytes)
	I0408 22:46:10.755983   16992 crio.go:462] duration metric: took 1.153124718s to copy over tarball
	I0408 22:46:10.756075   16992 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0408 22:46:12.911146   16992 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.155033698s)
	I0408 22:46:12.911189   16992 crio.go:469] duration metric: took 2.155177953s to extract the tarball
	I0408 22:46:12.911196   16992 ssh_runner.go:146] rm: /preloaded.tar.lz4
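Because no images were preloaded yet, the preload tarball (399124012 bytes) is copied over and unpacked into /var so CRI-O starts with the Kubernetes v1.32.2 images already in its store. A guest-side sketch of that step (commands and paths taken from the log above):

    stat -c "%s %y" /preloaded.tar.lz4 || true   # absent on first boot, so the tarball is scp'd from the host cache
    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
    sudo rm -f /preloaded.tar.lz4                # removed once the image store is unpacked under /var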
	I0408 22:46:12.947947   16992 ssh_runner.go:195] Run: sudo crictl images --output json
	I0408 22:46:12.988026   16992 crio.go:514] all images are preloaded for cri-o runtime.
	I0408 22:46:12.988049   16992 cache_images.go:84] Images are preloaded, skipping loading
	I0408 22:46:12.988056   16992 kubeadm.go:934] updating node { 192.168.39.199 8443 v1.32.2 crio true true} ...
	I0408 22:46:12.988152   16992 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-355098 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.199
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:addons-355098 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0408 22:46:12.988634   16992 ssh_runner.go:195] Run: crio config
	I0408 22:46:13.035456   16992 cni.go:84] Creating CNI manager for ""
	I0408 22:46:13.035479   16992 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0408 22:46:13.035490   16992 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0408 22:46:13.035509   16992 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.199 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-355098 NodeName:addons-355098 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.199"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.199 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/k
ubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0408 22:46:13.035629   16992 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.199
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-355098"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.199"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.199"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0408 22:46:13.035686   16992 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0408 22:46:13.044917   16992 binaries.go:44] Found k8s binaries, skipping transfer
	I0408 22:46:13.044990   16992 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0408 22:46:13.053517   16992 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0408 22:46:13.069902   16992 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0408 22:46:13.085338   16992 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2293 bytes)
	I0408 22:46:13.101412   16992 ssh_runner.go:195] Run: grep 192.168.39.199	control-plane.minikube.internal$ /etc/hosts
	I0408 22:46:13.105036   16992 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.199	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0408 22:46:13.116696   16992 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 22:46:13.236695   16992 ssh_runner.go:195] Run: sudo systemctl start kubelet
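At this point the kubelet unit shown earlier and its 10-kubeadm.conf drop-in have been written and systemd reloaded, so kubelet is started for kubeadm to take over. A quick way to confirm the effective flags on the guest (illustrative):

    systemctl cat kubelet | grep '^ExecStart=/var/lib/minikube'   # shows --node-ip and --hostname-override
    systemctl is-active kubelet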
	I0408 22:46:13.252632   16992 certs.go:68] Setting up /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/addons-355098 for IP: 192.168.39.199
	I0408 22:46:13.252650   16992 certs.go:194] generating shared ca certs ...
	I0408 22:46:13.252664   16992 certs.go:226] acquiring lock for ca certs: {Name:mk0d455aae85017ac942481bbc1202ccedea144f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 22:46:13.252783   16992 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/20501-9125/.minikube/ca.key
	I0408 22:46:13.437384   16992 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20501-9125/.minikube/ca.crt ...
	I0408 22:46:13.437410   16992 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20501-9125/.minikube/ca.crt: {Name:mkc69d7bbda3447e0308b18a77fd91858fa8927a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 22:46:13.437561   16992 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20501-9125/.minikube/ca.key ...
	I0408 22:46:13.437571   16992 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20501-9125/.minikube/ca.key: {Name:mkf2fc2e863017ea2687ee0ba3066d4ab0fa87a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 22:46:13.437639   16992 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20501-9125/.minikube/proxy-client-ca.key
	I0408 22:46:13.786722   16992 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20501-9125/.minikube/proxy-client-ca.crt ...
	I0408 22:46:13.786749   16992 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20501-9125/.minikube/proxy-client-ca.crt: {Name:mkf6a746759bbfda5ec1efdebea3f2a623362b4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 22:46:13.786899   16992 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20501-9125/.minikube/proxy-client-ca.key ...
	I0408 22:46:13.786909   16992 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20501-9125/.minikube/proxy-client-ca.key: {Name:mkf761cf2609b4b1e545cd485185f37c3884fd85 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 22:46:13.786992   16992 certs.go:256] generating profile certs ...
	I0408 22:46:13.787052   16992 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/addons-355098/client.key
	I0408 22:46:13.787066   16992 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/addons-355098/client.crt with IP's: []
	I0408 22:46:13.988712   16992 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/addons-355098/client.crt ...
	I0408 22:46:13.988741   16992 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/addons-355098/client.crt: {Name:mk5de13887f100e357ac9d97a0f8ff4a7ffd03df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 22:46:13.988883   16992 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/addons-355098/client.key ...
	I0408 22:46:13.988893   16992 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/addons-355098/client.key: {Name:mk2f47a1858a0f23737a35f3508114a1ca6ee676 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 22:46:13.988958   16992 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/addons-355098/apiserver.key.426013b8
	I0408 22:46:13.988975   16992 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/addons-355098/apiserver.crt.426013b8 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.199]
	I0408 22:46:14.174553   16992 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/addons-355098/apiserver.crt.426013b8 ...
	I0408 22:46:14.174581   16992 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/addons-355098/apiserver.crt.426013b8: {Name:mk6180f4ae508fb7192a7e2cca6743114ecb7671 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 22:46:14.174721   16992 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/addons-355098/apiserver.key.426013b8 ...
	I0408 22:46:14.174734   16992 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/addons-355098/apiserver.key.426013b8: {Name:mk7be505e3d974dd1138b32edffaa82b0ccd2f23 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 22:46:14.174804   16992 certs.go:381] copying /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/addons-355098/apiserver.crt.426013b8 -> /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/addons-355098/apiserver.crt
	I0408 22:46:14.174889   16992 certs.go:385] copying /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/addons-355098/apiserver.key.426013b8 -> /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/addons-355098/apiserver.key
	I0408 22:46:14.174948   16992 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/addons-355098/proxy-client.key
	I0408 22:46:14.174964   16992 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/addons-355098/proxy-client.crt with IP's: []
	I0408 22:46:14.785497   16992 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/addons-355098/proxy-client.crt ...
	I0408 22:46:14.785525   16992 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/addons-355098/proxy-client.crt: {Name:mk7b689f8e7baa3d1919aef378fb1d54e73d62f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 22:46:14.785711   16992 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/addons-355098/proxy-client.key ...
	I0408 22:46:14.785726   16992 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/addons-355098/proxy-client.key: {Name:mkbf149ef0b9eb2a094aef382887e72d8f586632 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 22:46:14.785925   16992 certs.go:484] found cert: /home/jenkins/minikube-integration/20501-9125/.minikube/certs/ca-key.pem (1679 bytes)
	I0408 22:46:14.785959   16992 certs.go:484] found cert: /home/jenkins/minikube-integration/20501-9125/.minikube/certs/ca.pem (1082 bytes)
	I0408 22:46:14.785982   16992 certs.go:484] found cert: /home/jenkins/minikube-integration/20501-9125/.minikube/certs/cert.pem (1123 bytes)
	I0408 22:46:14.786012   16992 certs.go:484] found cert: /home/jenkins/minikube-integration/20501-9125/.minikube/certs/key.pem (1675 bytes)
	I0408 22:46:14.786518   16992 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20501-9125/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0408 22:46:14.818260   16992 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20501-9125/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0408 22:46:14.859720   16992 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20501-9125/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0408 22:46:14.886400   16992 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20501-9125/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0408 22:46:14.909511   16992 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/addons-355098/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0408 22:46:14.930379   16992 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/addons-355098/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0408 22:46:14.951299   16992 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/addons-355098/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0408 22:46:14.971952   16992 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/addons-355098/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0408 22:46:14.992355   16992 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20501-9125/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0408 22:46:15.013040   16992 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0408 22:46:15.028132   16992 ssh_runner.go:195] Run: openssl version
	I0408 22:46:15.033332   16992 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0408 22:46:15.042822   16992 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0408 22:46:15.046712   16992 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  8 22:46 /usr/share/ca-certificates/minikubeCA.pem
	I0408 22:46:15.046756   16992 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0408 22:46:15.051946   16992 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
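The link name used above is OpenSSL's subject-hash form of the minikube CA, which is how TLS clients on the guest find it under /etc/ssl/certs. Illustrative check:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
    ls -l /etc/ssl/certs/b5213941.0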
	I0408 22:46:15.061784   16992 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0408 22:46:15.065664   16992 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0408 22:46:15.065717   16992 kubeadm.go:392] StartCluster: {Name:addons-355098 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:addons-355098 Names
pace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.199 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirr
or: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 22:46:15.065801   16992 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0408 22:46:15.065854   16992 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0408 22:46:15.101384   16992 cri.go:89] found id: ""
	I0408 22:46:15.101449   16992 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0408 22:46:15.114024   16992 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0408 22:46:15.122488   16992 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0408 22:46:15.130672   16992 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0408 22:46:15.130686   16992 kubeadm.go:157] found existing configuration files:
	
	I0408 22:46:15.130719   16992 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0408 22:46:15.138452   16992 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0408 22:46:15.138492   16992 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0408 22:46:15.146351   16992 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0408 22:46:15.153849   16992 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0408 22:46:15.153885   16992 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0408 22:46:15.161913   16992 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0408 22:46:15.169501   16992 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0408 22:46:15.169536   16992 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0408 22:46:15.177293   16992 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0408 22:46:15.184887   16992 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0408 22:46:15.184928   16992 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0408 22:46:15.192839   16992 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0408 22:46:15.243241   16992 kubeadm.go:310] [init] Using Kubernetes version: v1.32.2
	I0408 22:46:15.243376   16992 kubeadm.go:310] [preflight] Running pre-flight checks
	I0408 22:46:15.346346   16992 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0408 22:46:15.346502   16992 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0408 22:46:15.346611   16992 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0408 22:46:15.356285   16992 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0408 22:46:15.479075   16992 out.go:235]   - Generating certificates and keys ...
	I0408 22:46:15.479253   16992 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0408 22:46:15.479351   16992 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0408 22:46:15.479506   16992 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0408 22:46:15.783940   16992 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0408 22:46:15.986136   16992 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0408 22:46:16.111707   16992 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0408 22:46:16.554955   16992 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0408 22:46:16.555118   16992 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-355098 localhost] and IPs [192.168.39.199 127.0.0.1 ::1]
	I0408 22:46:16.686823   16992 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0408 22:46:16.687696   16992 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-355098 localhost] and IPs [192.168.39.199 127.0.0.1 ::1]
	I0408 22:46:16.952405   16992 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0408 22:46:17.019552   16992 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0408 22:46:17.254786   16992 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0408 22:46:17.255042   16992 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0408 22:46:17.697627   16992 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0408 22:46:17.764394   16992 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0408 22:46:18.313000   16992 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0408 22:46:18.416903   16992 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0408 22:46:18.468023   16992 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0408 22:46:18.468691   16992 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0408 22:46:18.471099   16992 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0408 22:46:18.472996   16992 out.go:235]   - Booting up control plane ...
	I0408 22:46:18.473109   16992 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0408 22:46:18.473223   16992 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0408 22:46:18.473322   16992 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0408 22:46:18.488729   16992 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0408 22:46:18.494091   16992 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0408 22:46:18.494158   16992 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0408 22:46:18.618883   16992 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0408 22:46:18.619039   16992 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0408 22:46:19.119182   16992 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 500.835753ms
	I0408 22:46:19.119268   16992 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0408 22:46:24.117815   16992 kubeadm.go:310] [api-check] The API server is healthy after 5.00117854s
	I0408 22:46:24.132764   16992 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0408 22:46:24.145227   16992 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0408 22:46:24.175685   16992 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0408 22:46:24.175981   16992 kubeadm.go:310] [mark-control-plane] Marking the node addons-355098 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0408 22:46:24.187378   16992 kubeadm.go:310] [bootstrap-token] Using token: 8v3h9k.fzjhloij3js3giaf
	I0408 22:46:24.188522   16992 out.go:235]   - Configuring RBAC rules ...
	I0408 22:46:24.188649   16992 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0408 22:46:24.197043   16992 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0408 22:46:24.203310   16992 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0408 22:46:24.206403   16992 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0408 22:46:24.212420   16992 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0408 22:46:24.222501   16992 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0408 22:46:24.523991   16992 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0408 22:46:24.959856   16992 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0408 22:46:25.525346   16992 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0408 22:46:25.526263   16992 kubeadm.go:310] 
	I0408 22:46:25.526339   16992 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0408 22:46:25.526358   16992 kubeadm.go:310] 
	I0408 22:46:25.526471   16992 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0408 22:46:25.526485   16992 kubeadm.go:310] 
	I0408 22:46:25.526519   16992 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0408 22:46:25.526597   16992 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0408 22:46:25.526671   16992 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0408 22:46:25.526680   16992 kubeadm.go:310] 
	I0408 22:46:25.526749   16992 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0408 22:46:25.526759   16992 kubeadm.go:310] 
	I0408 22:46:25.526829   16992 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0408 22:46:25.526842   16992 kubeadm.go:310] 
	I0408 22:46:25.526912   16992 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0408 22:46:25.527020   16992 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0408 22:46:25.527120   16992 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0408 22:46:25.527132   16992 kubeadm.go:310] 
	I0408 22:46:25.527253   16992 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0408 22:46:25.527364   16992 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0408 22:46:25.527375   16992 kubeadm.go:310] 
	I0408 22:46:25.527489   16992 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 8v3h9k.fzjhloij3js3giaf \
	I0408 22:46:25.527656   16992 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b5297d9bc4c0ea922a06282e0375039318a097df0c51ae921cb5fce714787b8b \
	I0408 22:46:25.527691   16992 kubeadm.go:310] 	--control-plane 
	I0408 22:46:25.527702   16992 kubeadm.go:310] 
	I0408 22:46:25.527790   16992 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0408 22:46:25.527800   16992 kubeadm.go:310] 
	I0408 22:46:25.527889   16992 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 8v3h9k.fzjhloij3js3giaf \
	I0408 22:46:25.528038   16992 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b5297d9bc4c0ea922a06282e0375039318a097df0c51ae921cb5fce714787b8b 
	I0408 22:46:25.528503   16992 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0408 22:46:25.528537   16992 cni.go:84] Creating CNI manager for ""
	I0408 22:46:25.528548   16992 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0408 22:46:25.530177   16992 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0408 22:46:25.531594   16992 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0408 22:46:25.542191   16992 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0408 22:46:25.558552   16992 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0408 22:46:25.558655   16992 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 22:46:25.558676   16992 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-355098 minikube.k8s.io/updated_at=2025_04_08T22_46_25_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=fd2f4c3eba2bd452b5997c855e28d0966165ba83 minikube.k8s.io/name=addons-355098 minikube.k8s.io/primary=true
	I0408 22:46:25.586056   16992 ops.go:34] apiserver oom_adj: -16
	I0408 22:46:25.660188   16992 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 22:46:26.160254   16992 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 22:46:26.661110   16992 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 22:46:27.161306   16992 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 22:46:27.660530   16992 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 22:46:28.160437   16992 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 22:46:28.660792   16992 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 22:46:29.161114   16992 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 22:46:29.661099   16992 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 22:46:29.742960   16992 kubeadm.go:1113] duration metric: took 4.184352757s to wait for elevateKubeSystemPrivileges
	I0408 22:46:29.742989   16992 kubeadm.go:394] duration metric: took 14.677276589s to StartCluster
	I0408 22:46:29.743016   16992 settings.go:142] acquiring lock: {Name:mk362ccb6fac1c71fdd578f798171322d97c1c2b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 22:46:29.743135   16992 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20501-9125/kubeconfig
	I0408 22:46:29.743469   16992 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20501-9125/kubeconfig: {Name:mk92c92b166b121ee2ee28c1b362d82cfe16b47a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 22:46:29.743657   16992 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0408 22:46:29.743683   16992 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.199 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0408 22:46:29.743766   16992 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0408 22:46:29.743911   16992 addons.go:69] Setting yakd=true in profile "addons-355098"
	I0408 22:46:29.743934   16992 addons.go:238] Setting addon yakd=true in "addons-355098"
	I0408 22:46:29.743967   16992 host.go:66] Checking if "addons-355098" exists ...
	I0408 22:46:29.743981   16992 addons.go:69] Setting gcp-auth=true in profile "addons-355098"
	I0408 22:46:29.743994   16992 config.go:182] Loaded profile config "addons-355098": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0408 22:46:29.744005   16992 mustload.go:65] Loading cluster: addons-355098
	I0408 22:46:29.744018   16992 addons.go:69] Setting default-storageclass=true in profile "addons-355098"
	I0408 22:46:29.744040   16992 addons.go:69] Setting storage-provisioner=true in profile "addons-355098"
	I0408 22:46:29.744040   16992 addons.go:69] Setting ingress-dns=true in profile "addons-355098"
	I0408 22:46:29.744095   16992 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-355098"
	I0408 22:46:29.744114   16992 addons.go:238] Setting addon ingress-dns=true in "addons-355098"
	I0408 22:46:29.744130   16992 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-355098"
	I0408 22:46:29.744152   16992 host.go:66] Checking if "addons-355098" exists ...
	I0408 22:46:29.743960   16992 addons.go:69] Setting inspektor-gadget=true in profile "addons-355098"
	I0408 22:46:29.744187   16992 config.go:182] Loaded profile config "addons-355098": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0408 22:46:29.744053   16992 addons.go:238] Setting addon storage-provisioner=true in "addons-355098"
	I0408 22:46:29.744228   16992 host.go:66] Checking if "addons-355098" exists ...
	I0408 22:46:29.744061   16992 addons.go:69] Setting metrics-server=true in profile "addons-355098"
	I0408 22:46:29.744256   16992 addons.go:238] Setting addon metrics-server=true in "addons-355098"
	I0408 22:46:29.744298   16992 host.go:66] Checking if "addons-355098" exists ...
	I0408 22:46:29.744193   16992 addons.go:238] Setting addon inspektor-gadget=true in "addons-355098"
	I0408 22:46:29.744402   16992 host.go:66] Checking if "addons-355098" exists ...
	I0408 22:46:29.744426   16992 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 22:46:29.744486   16992 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 22:46:29.744059   16992 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-355098"
	I0408 22:46:29.744071   16992 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-355098"
	I0408 22:46:29.744544   16992 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 22:46:29.744073   16992 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-355098"
	I0408 22:46:29.744557   16992 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 22:46:29.744547   16992 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-355098"
	I0408 22:46:29.744078   16992 addons.go:69] Setting registry=true in profile "addons-355098"
	I0408 22:46:29.744575   16992 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 22:46:29.744583   16992 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 22:46:29.744560   16992 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-355098"
	I0408 22:46:29.744615   16992 host.go:66] Checking if "addons-355098" exists ...
	I0408 22:46:29.744584   16992 addons.go:238] Setting addon registry=true in "addons-355098"
	I0408 22:46:29.744052   16992 addons.go:69] Setting ingress=true in profile "addons-355098"
	I0408 22:46:29.744643   16992 addons.go:238] Setting addon ingress=true in "addons-355098"
	I0408 22:46:29.744155   16992 host.go:66] Checking if "addons-355098" exists ...
	I0408 22:46:29.744674   16992 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 22:46:29.744700   16992 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 22:46:29.744751   16992 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 22:46:29.744773   16992 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 22:46:29.744843   16992 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 22:46:29.744850   16992 host.go:66] Checking if "addons-355098" exists ...
	I0408 22:46:29.744901   16992 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 22:46:29.744963   16992 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 22:46:29.744074   16992 addons.go:69] Setting volcano=true in profile "addons-355098"
	I0408 22:46:29.744985   16992 host.go:66] Checking if "addons-355098" exists ...
	I0408 22:46:29.744997   16992 addons.go:238] Setting addon volcano=true in "addons-355098"
	I0408 22:46:29.744084   16992 addons.go:69] Setting cloud-spanner=true in profile "addons-355098"
	I0408 22:46:29.745079   16992 addons.go:238] Setting addon cloud-spanner=true in "addons-355098"
	I0408 22:46:29.745090   16992 host.go:66] Checking if "addons-355098" exists ...
	I0408 22:46:29.744096   16992 addons.go:69] Setting volumesnapshots=true in profile "addons-355098"
	I0408 22:46:29.745128   16992 addons.go:238] Setting addon volumesnapshots=true in "addons-355098"
	I0408 22:46:29.745184   16992 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 22:46:29.745218   16992 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 22:46:29.744982   16992 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 22:46:29.745310   16992 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 22:46:29.745351   16992 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 22:46:29.745380   16992 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 22:46:29.744986   16992 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 22:46:29.745420   16992 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 22:46:29.745452   16992 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 22:46:29.744084   16992 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-355098"
	I0408 22:46:29.745497   16992 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-355098"
	I0408 22:46:29.745537   16992 out.go:177] * Verifying Kubernetes components...
	I0408 22:46:29.745756   16992 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 22:46:29.745789   16992 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 22:46:29.745498   16992 host.go:66] Checking if "addons-355098" exists ...
	I0408 22:46:29.746038   16992 host.go:66] Checking if "addons-355098" exists ...
	I0408 22:46:29.746574   16992 host.go:66] Checking if "addons-355098" exists ...
	I0408 22:46:29.746821   16992 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 22:46:29.764187   16992 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35261
	I0408 22:46:29.764406   16992 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34347
	I0408 22:46:29.764677   16992 main.go:141] libmachine: () Calling .GetVersion
	I0408 22:46:29.764849   16992 main.go:141] libmachine: () Calling .GetVersion
	I0408 22:46:29.765146   16992 main.go:141] libmachine: Using API Version  1
	I0408 22:46:29.765163   16992 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 22:46:29.765191   16992 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45671
	I0408 22:46:29.765285   16992 main.go:141] libmachine: Using API Version  1
	I0408 22:46:29.765300   16992 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 22:46:29.765596   16992 main.go:141] libmachine: () Calling .GetVersion
	I0408 22:46:29.765637   16992 main.go:141] libmachine: () Calling .GetMachineName
	I0408 22:46:29.765660   16992 main.go:141] libmachine: () Calling .GetMachineName
	I0408 22:46:29.765981   16992 main.go:141] libmachine: Using API Version  1
	I0408 22:46:29.765996   16992 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 22:46:29.766174   16992 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 22:46:29.766184   16992 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 22:46:29.766204   16992 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 22:46:29.766281   16992 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 22:46:29.766354   16992 main.go:141] libmachine: () Calling .GetMachineName
	I0408 22:46:29.770046   16992 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33671
	I0408 22:46:29.770437   16992 main.go:141] libmachine: () Calling .GetVersion
	I0408 22:46:29.770584   16992 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38683
	I0408 22:46:29.776132   16992 main.go:141] libmachine: Using API Version  1
	I0408 22:46:29.776151   16992 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 22:46:29.776334   16992 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 22:46:29.776372   16992 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 22:46:29.776400   16992 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 22:46:29.776414   16992 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 22:46:29.776423   16992 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 22:46:29.776445   16992 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 22:46:29.776601   16992 main.go:141] libmachine: () Calling .GetVersion
	I0408 22:46:29.782327   16992 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 22:46:29.782374   16992 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 22:46:29.782912   16992 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 22:46:29.782989   16992 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 22:46:29.783072   16992 main.go:141] libmachine: () Calling .GetMachineName
	I0408 22:46:29.783233   16992 main.go:141] libmachine: Using API Version  1
	I0408 22:46:29.783251   16992 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 22:46:29.783896   16992 main.go:141] libmachine: () Calling .GetMachineName
	I0408 22:46:29.783985   16992 main.go:141] libmachine: (addons-355098) Calling .GetState
	I0408 22:46:29.784984   16992 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 22:46:29.785024   16992 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 22:46:29.785722   16992 host.go:66] Checking if "addons-355098" exists ...
	I0408 22:46:29.786080   16992 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 22:46:29.786112   16992 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 22:46:29.796798   16992 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40237
	I0408 22:46:29.797389   16992 main.go:141] libmachine: () Calling .GetVersion
	I0408 22:46:29.797924   16992 main.go:141] libmachine: Using API Version  1
	I0408 22:46:29.797941   16992 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 22:46:29.798037   16992 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46693
	I0408 22:46:29.798693   16992 main.go:141] libmachine: () Calling .GetMachineName
	I0408 22:46:29.799002   16992 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38601
	I0408 22:46:29.799185   16992 main.go:141] libmachine: () Calling .GetVersion
	I0408 22:46:29.799223   16992 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 22:46:29.799250   16992 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 22:46:29.799485   16992 main.go:141] libmachine: () Calling .GetVersion
	I0408 22:46:29.799920   16992 main.go:141] libmachine: Using API Version  1
	I0408 22:46:29.799938   16992 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 22:46:29.800131   16992 main.go:141] libmachine: Using API Version  1
	I0408 22:46:29.800145   16992 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 22:46:29.800306   16992 main.go:141] libmachine: () Calling .GetMachineName
	I0408 22:46:29.800532   16992 main.go:141] libmachine: (addons-355098) Calling .GetState
	I0408 22:46:29.800718   16992 main.go:141] libmachine: () Calling .GetMachineName
	I0408 22:46:29.800867   16992 main.go:141] libmachine: (addons-355098) Calling .GetState
	I0408 22:46:29.802355   16992 main.go:141] libmachine: (addons-355098) Calling .DriverName
	I0408 22:46:29.804889   16992 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.37.0
	I0408 22:46:29.806082   16992 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I0408 22:46:29.806103   16992 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5248 bytes)
	I0408 22:46:29.806121   16992 main.go:141] libmachine: (addons-355098) Calling .GetSSHHostname
	I0408 22:46:29.807610   16992 main.go:141] libmachine: (addons-355098) Calling .DriverName
	I0408 22:46:29.809054   16992 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0408 22:46:29.809929   16992 main.go:141] libmachine: (addons-355098) DBG | domain addons-355098 has defined MAC address 52:54:00:ca:e2:e3 in network mk-addons-355098
	I0408 22:46:29.810182   16992 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0408 22:46:29.810198   16992 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0408 22:46:29.810215   16992 main.go:141] libmachine: (addons-355098) Calling .GetSSHHostname
	I0408 22:46:29.810416   16992 main.go:141] libmachine: (addons-355098) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:e2:e3", ip: ""} in network mk-addons-355098: {Iface:virbr1 ExpiryTime:2025-04-08 23:45:58 +0000 UTC Type:0 Mac:52:54:00:ca:e2:e3 Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:addons-355098 Clientid:01:52:54:00:ca:e2:e3}
	I0408 22:46:29.810441   16992 main.go:141] libmachine: (addons-355098) DBG | domain addons-355098 has defined IP address 192.168.39.199 and MAC address 52:54:00:ca:e2:e3 in network mk-addons-355098
	I0408 22:46:29.810677   16992 main.go:141] libmachine: (addons-355098) Calling .GetSSHPort
	I0408 22:46:29.810894   16992 main.go:141] libmachine: (addons-355098) Calling .GetSSHKeyPath
	I0408 22:46:29.811098   16992 main.go:141] libmachine: (addons-355098) Calling .GetSSHUsername
	I0408 22:46:29.811302   16992 sshutil.go:53] new ssh client: &{IP:192.168.39.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20501-9125/.minikube/machines/addons-355098/id_rsa Username:docker}
	I0408 22:46:29.811735   16992 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40989
	I0408 22:46:29.812248   16992 main.go:141] libmachine: () Calling .GetVersion
	I0408 22:46:29.812872   16992 main.go:141] libmachine: Using API Version  1
	I0408 22:46:29.812891   16992 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 22:46:29.813458   16992 main.go:141] libmachine: (addons-355098) DBG | domain addons-355098 has defined MAC address 52:54:00:ca:e2:e3 in network mk-addons-355098
	I0408 22:46:29.813596   16992 main.go:141] libmachine: () Calling .GetMachineName
	I0408 22:46:29.814232   16992 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 22:46:29.814276   16992 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 22:46:29.814544   16992 main.go:141] libmachine: (addons-355098) Calling .GetSSHPort
	I0408 22:46:29.814639   16992 main.go:141] libmachine: (addons-355098) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:e2:e3", ip: ""} in network mk-addons-355098: {Iface:virbr1 ExpiryTime:2025-04-08 23:45:58 +0000 UTC Type:0 Mac:52:54:00:ca:e2:e3 Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:addons-355098 Clientid:01:52:54:00:ca:e2:e3}
	I0408 22:46:29.814655   16992 main.go:141] libmachine: (addons-355098) DBG | domain addons-355098 has defined IP address 192.168.39.199 and MAC address 52:54:00:ca:e2:e3 in network mk-addons-355098
	I0408 22:46:29.814689   16992 main.go:141] libmachine: (addons-355098) Calling .GetSSHKeyPath
	I0408 22:46:29.814878   16992 main.go:141] libmachine: (addons-355098) Calling .GetSSHUsername
	I0408 22:46:29.815070   16992 sshutil.go:53] new ssh client: &{IP:192.168.39.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20501-9125/.minikube/machines/addons-355098/id_rsa Username:docker}
	I0408 22:46:29.816069   16992 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43403
	I0408 22:46:29.817284   16992 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41579
	I0408 22:46:29.818436   16992 main.go:141] libmachine: () Calling .GetVersion
	I0408 22:46:29.818946   16992 main.go:141] libmachine: Using API Version  1
	I0408 22:46:29.818965   16992 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 22:46:29.819371   16992 main.go:141] libmachine: () Calling .GetMachineName
	I0408 22:46:29.819948   16992 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 22:46:29.819986   16992 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 22:46:29.820422   16992 main.go:141] libmachine: () Calling .GetVersion
	I0408 22:46:29.822365   16992 main.go:141] libmachine: Using API Version  1
	I0408 22:46:29.822383   16992 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 22:46:29.823458   16992 main.go:141] libmachine: () Calling .GetMachineName
	I0408 22:46:29.823644   16992 main.go:141] libmachine: (addons-355098) Calling .GetState
	I0408 22:46:29.827585   16992 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41609
	I0408 22:46:29.828071   16992 addons.go:238] Setting addon default-storageclass=true in "addons-355098"
	I0408 22:46:29.828114   16992 host.go:66] Checking if "addons-355098" exists ...
	I0408 22:46:29.828496   16992 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 22:46:29.828529   16992 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 22:46:29.828875   16992 main.go:141] libmachine: () Calling .GetVersion
	I0408 22:46:29.829258   16992 main.go:141] libmachine: Using API Version  1
	I0408 22:46:29.829279   16992 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 22:46:29.829688   16992 main.go:141] libmachine: () Calling .GetMachineName
	I0408 22:46:29.829860   16992 main.go:141] libmachine: (addons-355098) Calling .GetState
	I0408 22:46:29.830442   16992 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43469
	I0408 22:46:29.831738   16992 main.go:141] libmachine: (addons-355098) Calling .DriverName
	I0408 22:46:29.832315   16992 main.go:141] libmachine: () Calling .GetVersion
	I0408 22:46:29.832720   16992 main.go:141] libmachine: Using API Version  1
	I0408 22:46:29.832739   16992 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 22:46:29.833242   16992 main.go:141] libmachine: () Calling .GetMachineName
	I0408 22:46:29.833884   16992 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 22:46:29.833921   16992 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 22:46:29.834139   16992 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0408 22:46:29.835337   16992 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0408 22:46:29.835357   16992 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0408 22:46:29.835377   16992 main.go:141] libmachine: (addons-355098) Calling .GetSSHHostname
	I0408 22:46:29.837437   16992 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44819
	I0408 22:46:29.837985   16992 main.go:141] libmachine: () Calling .GetVersion
	I0408 22:46:29.838493   16992 main.go:141] libmachine: Using API Version  1
	I0408 22:46:29.838534   16992 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 22:46:29.839051   16992 main.go:141] libmachine: () Calling .GetMachineName
	I0408 22:46:29.839633   16992 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 22:46:29.839672   16992 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 22:46:29.839893   16992 main.go:141] libmachine: (addons-355098) DBG | domain addons-355098 has defined MAC address 52:54:00:ca:e2:e3 in network mk-addons-355098
	I0408 22:46:29.839915   16992 main.go:141] libmachine: (addons-355098) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:e2:e3", ip: ""} in network mk-addons-355098: {Iface:virbr1 ExpiryTime:2025-04-08 23:45:58 +0000 UTC Type:0 Mac:52:54:00:ca:e2:e3 Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:addons-355098 Clientid:01:52:54:00:ca:e2:e3}
	I0408 22:46:29.839940   16992 main.go:141] libmachine: (addons-355098) DBG | domain addons-355098 has defined IP address 192.168.39.199 and MAC address 52:54:00:ca:e2:e3 in network mk-addons-355098
	I0408 22:46:29.840069   16992 main.go:141] libmachine: (addons-355098) Calling .GetSSHPort
	I0408 22:46:29.840218   16992 main.go:141] libmachine: (addons-355098) Calling .GetSSHKeyPath
	I0408 22:46:29.840346   16992 main.go:141] libmachine: (addons-355098) Calling .GetSSHUsername
	I0408 22:46:29.840459   16992 sshutil.go:53] new ssh client: &{IP:192.168.39.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20501-9125/.minikube/machines/addons-355098/id_rsa Username:docker}
	I0408 22:46:29.844160   16992 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40581
	I0408 22:46:29.844958   16992 main.go:141] libmachine: () Calling .GetVersion
	I0408 22:46:29.845512   16992 main.go:141] libmachine: Using API Version  1
	I0408 22:46:29.845527   16992 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 22:46:29.845904   16992 main.go:141] libmachine: () Calling .GetMachineName
	I0408 22:46:29.846078   16992 main.go:141] libmachine: (addons-355098) Calling .GetState
	I0408 22:46:29.846128   16992 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37803
	I0408 22:46:29.847772   16992 main.go:141] libmachine: () Calling .GetVersion
	I0408 22:46:29.848073   16992 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33363
	I0408 22:46:29.848545   16992 main.go:141] libmachine: () Calling .GetVersion
	I0408 22:46:29.848617   16992 main.go:141] libmachine: (addons-355098) Calling .DriverName
	I0408 22:46:29.849005   16992 main.go:141] libmachine: Using API Version  1
	I0408 22:46:29.849042   16992 main.go:141] libmachine: Using API Version  1
	I0408 22:46:29.849049   16992 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 22:46:29.849059   16992 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 22:46:29.849445   16992 main.go:141] libmachine: () Calling .GetMachineName
	I0408 22:46:29.849488   16992 main.go:141] libmachine: () Calling .GetMachineName
	I0408 22:46:29.850122   16992 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 22:46:29.850183   16992 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 22:46:29.850288   16992 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0408 22:46:29.851065   16992 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 22:46:29.851106   16992 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 22:46:29.851625   16992 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0408 22:46:29.851651   16992 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0408 22:46:29.851670   16992 main.go:141] libmachine: (addons-355098) Calling .GetSSHHostname
	I0408 22:46:29.855347   16992 main.go:141] libmachine: (addons-355098) DBG | domain addons-355098 has defined MAC address 52:54:00:ca:e2:e3 in network mk-addons-355098
	I0408 22:46:29.855795   16992 main.go:141] libmachine: (addons-355098) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:e2:e3", ip: ""} in network mk-addons-355098: {Iface:virbr1 ExpiryTime:2025-04-08 23:45:58 +0000 UTC Type:0 Mac:52:54:00:ca:e2:e3 Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:addons-355098 Clientid:01:52:54:00:ca:e2:e3}
	I0408 22:46:29.855827   16992 main.go:141] libmachine: (addons-355098) DBG | domain addons-355098 has defined IP address 192.168.39.199 and MAC address 52:54:00:ca:e2:e3 in network mk-addons-355098
	I0408 22:46:29.856062   16992 main.go:141] libmachine: (addons-355098) Calling .GetSSHPort
	I0408 22:46:29.856241   16992 main.go:141] libmachine: (addons-355098) Calling .GetSSHKeyPath
	I0408 22:46:29.856442   16992 main.go:141] libmachine: (addons-355098) Calling .GetSSHUsername
	I0408 22:46:29.856596   16992 sshutil.go:53] new ssh client: &{IP:192.168.39.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20501-9125/.minikube/machines/addons-355098/id_rsa Username:docker}
	I0408 22:46:29.863076   16992 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35313
	I0408 22:46:29.863587   16992 main.go:141] libmachine: () Calling .GetVersion
	I0408 22:46:29.863795   16992 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38457
	I0408 22:46:29.864207   16992 main.go:141] libmachine: Using API Version  1
	I0408 22:46:29.864226   16992 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 22:46:29.864668   16992 main.go:141] libmachine: () Calling .GetMachineName
	I0408 22:46:29.865289   16992 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 22:46:29.865328   16992 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 22:46:29.865572   16992 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42479
	I0408 22:46:29.865970   16992 main.go:141] libmachine: () Calling .GetVersion
	I0408 22:46:29.866061   16992 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39215
	I0408 22:46:29.866195   16992 main.go:141] libmachine: () Calling .GetVersion
	I0408 22:46:29.866405   16992 main.go:141] libmachine: Using API Version  1
	I0408 22:46:29.866422   16992 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 22:46:29.866478   16992 main.go:141] libmachine: () Calling .GetVersion
	I0408 22:46:29.866840   16992 main.go:141] libmachine: Using API Version  1
	I0408 22:46:29.866860   16992 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 22:46:29.867282   16992 main.go:141] libmachine: () Calling .GetMachineName
	I0408 22:46:29.867454   16992 main.go:141] libmachine: () Calling .GetMachineName
	I0408 22:46:29.867500   16992 main.go:141] libmachine: (addons-355098) Calling .GetState
	I0408 22:46:29.867754   16992 main.go:141] libmachine: Using API Version  1
	I0408 22:46:29.867772   16992 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 22:46:29.868182   16992 main.go:141] libmachine: () Calling .GetMachineName
	I0408 22:46:29.868622   16992 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 22:46:29.868680   16992 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 22:46:29.869032   16992 main.go:141] libmachine: (addons-355098) Calling .GetState
	I0408 22:46:29.869198   16992 main.go:141] libmachine: (addons-355098) Calling .DriverName
	I0408 22:46:29.871171   16992 main.go:141] libmachine: (addons-355098) Calling .DriverName
	I0408 22:46:29.871199   16992 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0408 22:46:29.872198   16992 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32843
	I0408 22:46:29.872474   16992 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0408 22:46:29.872503   16992 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0408 22:46:29.872527   16992 main.go:141] libmachine: (addons-355098) Calling .GetSSHHostname
	I0408 22:46:29.872475   16992 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0408 22:46:29.872769   16992 main.go:141] libmachine: () Calling .GetVersion
	I0408 22:46:29.873279   16992 main.go:141] libmachine: Using API Version  1
	I0408 22:46:29.873306   16992 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 22:46:29.873677   16992 main.go:141] libmachine: () Calling .GetMachineName
	I0408 22:46:29.873840   16992 main.go:141] libmachine: (addons-355098) Calling .GetState
	I0408 22:46:29.874085   16992 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0408 22:46:29.874108   16992 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0408 22:46:29.874126   16992 main.go:141] libmachine: (addons-355098) Calling .GetSSHHostname
	I0408 22:46:29.875532   16992 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43509
	I0408 22:46:29.876151   16992 main.go:141] libmachine: () Calling .GetVersion
	I0408 22:46:29.876488   16992 main.go:141] libmachine: (addons-355098) DBG | domain addons-355098 has defined MAC address 52:54:00:ca:e2:e3 in network mk-addons-355098
	I0408 22:46:29.876606   16992 main.go:141] libmachine: Using API Version  1
	I0408 22:46:29.876617   16992 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 22:46:29.877013   16992 main.go:141] libmachine: () Calling .GetMachineName
	I0408 22:46:29.877118   16992 main.go:141] libmachine: (addons-355098) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:e2:e3", ip: ""} in network mk-addons-355098: {Iface:virbr1 ExpiryTime:2025-04-08 23:45:58 +0000 UTC Type:0 Mac:52:54:00:ca:e2:e3 Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:addons-355098 Clientid:01:52:54:00:ca:e2:e3}
	I0408 22:46:29.877153   16992 main.go:141] libmachine: (addons-355098) DBG | domain addons-355098 has defined IP address 192.168.39.199 and MAC address 52:54:00:ca:e2:e3 in network mk-addons-355098
	I0408 22:46:29.877335   16992 main.go:141] libmachine: (addons-355098) Calling .GetState
	I0408 22:46:29.877393   16992 main.go:141] libmachine: (addons-355098) Calling .GetSSHPort
	I0408 22:46:29.877564   16992 main.go:141] libmachine: (addons-355098) Calling .GetSSHKeyPath
	I0408 22:46:29.878113   16992 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36689
	I0408 22:46:29.878226   16992 main.go:141] libmachine: (addons-355098) Calling .GetSSHUsername
	I0408 22:46:29.878401   16992 sshutil.go:53] new ssh client: &{IP:192.168.39.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20501-9125/.minikube/machines/addons-355098/id_rsa Username:docker}
	I0408 22:46:29.878835   16992 main.go:141] libmachine: () Calling .GetVersion
	I0408 22:46:29.879401   16992 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33851
	I0408 22:46:29.879570   16992 main.go:141] libmachine: (addons-355098) Calling .DriverName
	I0408 22:46:29.879626   16992 main.go:141] libmachine: (addons-355098) Calling .DriverName
	I0408 22:46:29.879325   16992 main.go:141] libmachine: Using API Version  1
	I0408 22:46:29.879724   16992 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 22:46:29.879749   16992 main.go:141] libmachine: Making call to close driver server
	I0408 22:46:29.879761   16992 main.go:141] libmachine: (addons-355098) Calling .Close
	I0408 22:46:29.881094   16992 main.go:141] libmachine: () Calling .GetVersion
	I0408 22:46:29.881231   16992 main.go:141] libmachine: () Calling .GetMachineName
	I0408 22:46:29.881288   16992 main.go:141] libmachine: (addons-355098) DBG | domain addons-355098 has defined MAC address 52:54:00:ca:e2:e3 in network mk-addons-355098
	I0408 22:46:29.881306   16992 main.go:141] libmachine: (addons-355098) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:e2:e3", ip: ""} in network mk-addons-355098: {Iface:virbr1 ExpiryTime:2025-04-08 23:45:58 +0000 UTC Type:0 Mac:52:54:00:ca:e2:e3 Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:addons-355098 Clientid:01:52:54:00:ca:e2:e3}
	I0408 22:46:29.881321   16992 main.go:141] libmachine: (addons-355098) DBG | domain addons-355098 has defined IP address 192.168.39.199 and MAC address 52:54:00:ca:e2:e3 in network mk-addons-355098
	I0408 22:46:29.881337   16992 main.go:141] libmachine: (addons-355098) DBG | Closing plugin on server side
	I0408 22:46:29.881349   16992 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I0408 22:46:29.881357   16992 main.go:141] libmachine: Successfully made call to close driver server
	I0408 22:46:29.881388   16992 main.go:141] libmachine: (addons-355098) Calling .GetSSHPort
	I0408 22:46:29.881367   16992 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 22:46:29.881433   16992 main.go:141] libmachine: Making call to close driver server
	I0408 22:46:29.881439   16992 main.go:141] libmachine: (addons-355098) Calling .Close
	I0408 22:46:29.881743   16992 main.go:141] libmachine: (addons-355098) DBG | Closing plugin on server side
	I0408 22:46:29.881766   16992 main.go:141] libmachine: Successfully made call to close driver server
	I0408 22:46:29.881773   16992 main.go:141] libmachine: Making call to close connection to plugin binary
	W0408 22:46:29.881863   16992 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0408 22:46:29.882009   16992 main.go:141] libmachine: (addons-355098) Calling .GetState
	I0408 22:46:29.882079   16992 main.go:141] libmachine: (addons-355098) Calling .GetSSHKeyPath
	I0408 22:46:29.882602   16992 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43013
	I0408 22:46:29.882718   16992 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44827
	I0408 22:46:29.882796   16992 main.go:141] libmachine: (addons-355098) Calling .GetSSHUsername
	I0408 22:46:29.882958   16992 sshutil.go:53] new ssh client: &{IP:192.168.39.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20501-9125/.minikube/machines/addons-355098/id_rsa Username:docker}
	I0408 22:46:29.883337   16992 main.go:141] libmachine: () Calling .GetVersion
	I0408 22:46:29.883460   16992 main.go:141] libmachine: () Calling .GetVersion
	I0408 22:46:29.883594   16992 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I0408 22:46:29.884042   16992 main.go:141] libmachine: Using API Version  1
	I0408 22:46:29.884065   16992 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 22:46:29.884594   16992 main.go:141] libmachine: Using API Version  1
	I0408 22:46:29.884716   16992 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 22:46:29.884744   16992 main.go:141] libmachine: Using API Version  1
	I0408 22:46:29.884758   16992 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 22:46:29.885406   16992 main.go:141] libmachine: (addons-355098) Calling .DriverName
	I0408 22:46:29.885540   16992 main.go:141] libmachine: () Calling .GetMachineName
	I0408 22:46:29.885667   16992 main.go:141] libmachine: () Calling .GetMachineName
	I0408 22:46:29.885881   16992 main.go:141] libmachine: () Calling .GetMachineName
	I0408 22:46:29.885955   16992 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.3
	I0408 22:46:29.886064   16992 main.go:141] libmachine: (addons-355098) Calling .GetState
	I0408 22:46:29.886328   16992 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 22:46:29.886361   16992 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 22:46:29.887072   16992 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33719
	I0408 22:46:29.888071   16992 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0408 22:46:29.888100   16992 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0408 22:46:29.888120   16992 main.go:141] libmachine: (addons-355098) Calling .GetSSHHostname
	I0408 22:46:29.888071   16992 main.go:141] libmachine: (addons-355098) Calling .DriverName
	I0408 22:46:29.888216   16992 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0408 22:46:29.888239   16992 main.go:141] libmachine: (addons-355098) Calling .DriverName
	I0408 22:46:29.888326   16992 main.go:141] libmachine: () Calling .GetVersion
	I0408 22:46:29.889981   16992 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37677
	I0408 22:46:29.890336   16992 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.0
	I0408 22:46:29.890362   16992 main.go:141] libmachine: () Calling .GetVersion
	I0408 22:46:29.890813   16992 main.go:141] libmachine: Using API Version  1
	I0408 22:46:29.890834   16992 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 22:46:29.891303   16992 main.go:141] libmachine: () Calling .GetMachineName
	I0408 22:46:29.891551   16992 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0408 22:46:29.891572   16992 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0408 22:46:29.891591   16992 main.go:141] libmachine: (addons-355098) Calling .GetSSHHostname
	I0408 22:46:29.891659   16992 main.go:141] libmachine: (addons-355098) DBG | domain addons-355098 has defined MAC address 52:54:00:ca:e2:e3 in network mk-addons-355098
	I0408 22:46:29.891825   16992 main.go:141] libmachine: Using API Version  1
	I0408 22:46:29.891842   16992 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 22:46:29.892203   16992 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0408 22:46:29.892307   16992 main.go:141] libmachine: (addons-355098) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:e2:e3", ip: ""} in network mk-addons-355098: {Iface:virbr1 ExpiryTime:2025-04-08 23:45:58 +0000 UTC Type:0 Mac:52:54:00:ca:e2:e3 Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:addons-355098 Clientid:01:52:54:00:ca:e2:e3}
	I0408 22:46:29.892324   16992 main.go:141] libmachine: (addons-355098) Calling .GetState
	I0408 22:46:29.892339   16992 main.go:141] libmachine: (addons-355098) DBG | domain addons-355098 has defined IP address 192.168.39.199 and MAC address 52:54:00:ca:e2:e3 in network mk-addons-355098
	I0408 22:46:29.892505   16992 main.go:141] libmachine: (addons-355098) Calling .GetSSHPort
	I0408 22:46:29.892646   16992 main.go:141] libmachine: (addons-355098) Calling .GetSSHKeyPath
	I0408 22:46:29.892770   16992 main.go:141] libmachine: (addons-355098) Calling .GetSSHUsername
	I0408 22:46:29.892829   16992 main.go:141] libmachine: () Calling .GetMachineName
	I0408 22:46:29.892878   16992 sshutil.go:53] new ssh client: &{IP:192.168.39.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20501-9125/.minikube/machines/addons-355098/id_rsa Username:docker}
	I0408 22:46:29.893339   16992 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36855
	I0408 22:46:29.893593   16992 main.go:141] libmachine: (addons-355098) Calling .GetState
	I0408 22:46:29.893843   16992 main.go:141] libmachine: () Calling .GetVersion
	I0408 22:46:29.894887   16992 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0408 22:46:29.895110   16992 main.go:141] libmachine: Using API Version  1
	I0408 22:46:29.895135   16992 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 22:46:29.895588   16992 main.go:141] libmachine: () Calling .GetMachineName
	I0408 22:46:29.896520   16992 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-355098"
	I0408 22:46:29.896574   16992 host.go:66] Checking if "addons-355098" exists ...
	I0408 22:46:29.896947   16992 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 22:46:29.896986   16992 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 22:46:29.897242   16992 main.go:141] libmachine: (addons-355098) Calling .DriverName
	I0408 22:46:29.897283   16992 main.go:141] libmachine: (addons-355098) Calling .GetSSHPort
	I0408 22:46:29.897304   16992 main.go:141] libmachine: (addons-355098) DBG | domain addons-355098 has defined MAC address 52:54:00:ca:e2:e3 in network mk-addons-355098
	I0408 22:46:29.897330   16992 main.go:141] libmachine: (addons-355098) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:e2:e3", ip: ""} in network mk-addons-355098: {Iface:virbr1 ExpiryTime:2025-04-08 23:45:58 +0000 UTC Type:0 Mac:52:54:00:ca:e2:e3 Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:addons-355098 Clientid:01:52:54:00:ca:e2:e3}
	I0408 22:46:29.897345   16992 main.go:141] libmachine: (addons-355098) DBG | domain addons-355098 has defined IP address 192.168.39.199 and MAC address 52:54:00:ca:e2:e3 in network mk-addons-355098
	I0408 22:46:29.897451   16992 main.go:141] libmachine: (addons-355098) Calling .GetState
	I0408 22:46:29.897460   16992 main.go:141] libmachine: (addons-355098) Calling .GetSSHKeyPath
	I0408 22:46:29.897585   16992 main.go:141] libmachine: (addons-355098) Calling .GetSSHUsername
	I0408 22:46:29.897475   16992 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0408 22:46:29.897766   16992 sshutil.go:53] new ssh client: &{IP:192.168.39.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20501-9125/.minikube/machines/addons-355098/id_rsa Username:docker}
	I0408 22:46:29.898821   16992 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.8
	I0408 22:46:29.898791   16992 main.go:141] libmachine: (addons-355098) Calling .DriverName
	I0408 22:46:29.900371   16992 out.go:177]   - Using image docker.io/registry:2.8.3
	I0408 22:46:29.900523   16992 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.31
	I0408 22:46:29.901562   16992 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I0408 22:46:29.901580   16992 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0408 22:46:29.901598   16992 main.go:141] libmachine: (addons-355098) Calling .GetSSHHostname
	I0408 22:46:29.901612   16992 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I0408 22:46:29.901627   16992 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0408 22:46:29.901643   16992 main.go:141] libmachine: (addons-355098) Calling .GetSSHHostname
	I0408 22:46:29.903930   16992 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0408 22:46:29.904618   16992 main.go:141] libmachine: (addons-355098) DBG | domain addons-355098 has defined MAC address 52:54:00:ca:e2:e3 in network mk-addons-355098
	I0408 22:46:29.905176   16992 main.go:141] libmachine: (addons-355098) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:e2:e3", ip: ""} in network mk-addons-355098: {Iface:virbr1 ExpiryTime:2025-04-08 23:45:58 +0000 UTC Type:0 Mac:52:54:00:ca:e2:e3 Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:addons-355098 Clientid:01:52:54:00:ca:e2:e3}
	I0408 22:46:29.905204   16992 main.go:141] libmachine: (addons-355098) DBG | domain addons-355098 has defined IP address 192.168.39.199 and MAC address 52:54:00:ca:e2:e3 in network mk-addons-355098
	I0408 22:46:29.905379   16992 main.go:141] libmachine: (addons-355098) Calling .GetSSHPort
	I0408 22:46:29.905557   16992 main.go:141] libmachine: (addons-355098) Calling .GetSSHKeyPath
	I0408 22:46:29.905694   16992 main.go:141] libmachine: (addons-355098) Calling .GetSSHUsername
	I0408 22:46:29.905831   16992 sshutil.go:53] new ssh client: &{IP:192.168.39.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20501-9125/.minikube/machines/addons-355098/id_rsa Username:docker}
	I0408 22:46:29.906279   16992 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0408 22:46:29.907306   16992 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0408 22:46:29.907578   16992 main.go:141] libmachine: (addons-355098) DBG | domain addons-355098 has defined MAC address 52:54:00:ca:e2:e3 in network mk-addons-355098
	I0408 22:46:29.908512   16992 main.go:141] libmachine: (addons-355098) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:e2:e3", ip: ""} in network mk-addons-355098: {Iface:virbr1 ExpiryTime:2025-04-08 23:45:58 +0000 UTC Type:0 Mac:52:54:00:ca:e2:e3 Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:addons-355098 Clientid:01:52:54:00:ca:e2:e3}
	I0408 22:46:29.908535   16992 main.go:141] libmachine: (addons-355098) DBG | domain addons-355098 has defined IP address 192.168.39.199 and MAC address 52:54:00:ca:e2:e3 in network mk-addons-355098
	I0408 22:46:29.908726   16992 main.go:141] libmachine: (addons-355098) Calling .GetSSHPort
	I0408 22:46:29.908912   16992 main.go:141] libmachine: (addons-355098) Calling .GetSSHKeyPath
	I0408 22:46:29.909072   16992 main.go:141] libmachine: (addons-355098) Calling .GetSSHUsername
	I0408 22:46:29.909138   16992 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0408 22:46:29.909232   16992 sshutil.go:53] new ssh client: &{IP:192.168.39.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20501-9125/.minikube/machines/addons-355098/id_rsa Username:docker}
	I0408 22:46:29.910348   16992 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0408 22:46:29.910366   16992 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0408 22:46:29.910384   16992 main.go:141] libmachine: (addons-355098) Calling .GetSSHHostname
	I0408 22:46:29.914198   16992 main.go:141] libmachine: (addons-355098) DBG | domain addons-355098 has defined MAC address 52:54:00:ca:e2:e3 in network mk-addons-355098
	I0408 22:46:29.914253   16992 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40885
	I0408 22:46:29.914649   16992 main.go:141] libmachine: () Calling .GetVersion
	I0408 22:46:29.914881   16992 main.go:141] libmachine: (addons-355098) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:e2:e3", ip: ""} in network mk-addons-355098: {Iface:virbr1 ExpiryTime:2025-04-08 23:45:58 +0000 UTC Type:0 Mac:52:54:00:ca:e2:e3 Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:addons-355098 Clientid:01:52:54:00:ca:e2:e3}
	I0408 22:46:29.914909   16992 main.go:141] libmachine: (addons-355098) DBG | domain addons-355098 has defined IP address 192.168.39.199 and MAC address 52:54:00:ca:e2:e3 in network mk-addons-355098
	I0408 22:46:29.915049   16992 main.go:141] libmachine: (addons-355098) Calling .GetSSHPort
	I0408 22:46:29.915196   16992 main.go:141] libmachine: (addons-355098) Calling .GetSSHKeyPath
	I0408 22:46:29.915206   16992 main.go:141] libmachine: Using API Version  1
	I0408 22:46:29.915232   16992 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 22:46:29.915373   16992 main.go:141] libmachine: (addons-355098) Calling .GetSSHUsername
	I0408 22:46:29.915535   16992 sshutil.go:53] new ssh client: &{IP:192.168.39.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20501-9125/.minikube/machines/addons-355098/id_rsa Username:docker}
	I0408 22:46:29.915587   16992 main.go:141] libmachine: () Calling .GetMachineName
	I0408 22:46:29.915754   16992 main.go:141] libmachine: (addons-355098) Calling .GetState
	I0408 22:46:29.917517   16992 main.go:141] libmachine: (addons-355098) Calling .DriverName
	I0408 22:46:29.917703   16992 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46393
	I0408 22:46:29.918023   16992 main.go:141] libmachine: () Calling .GetVersion
	I0408 22:46:29.918519   16992 main.go:141] libmachine: Using API Version  1
	I0408 22:46:29.918542   16992 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 22:46:29.918953   16992 main.go:141] libmachine: () Calling .GetMachineName
	I0408 22:46:29.919082   16992 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35973
	I0408 22:46:29.919093   16992 main.go:141] libmachine: (addons-355098) Calling .GetState
	I0408 22:46:29.919367   16992 out.go:177]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I0408 22:46:29.919787   16992 main.go:141] libmachine: () Calling .GetVersion
	I0408 22:46:29.920300   16992 main.go:141] libmachine: Using API Version  1
	I0408 22:46:29.920320   16992 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 22:46:29.920609   16992 main.go:141] libmachine: (addons-355098) Calling .DriverName
	I0408 22:46:29.920648   16992 main.go:141] libmachine: () Calling .GetMachineName
	I0408 22:46:29.920668   16992 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0408 22:46:29.920685   16992 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I0408 22:46:29.920709   16992 main.go:141] libmachine: (addons-355098) Calling .GetSSHHostname
	I0408 22:46:29.920807   16992 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0408 22:46:29.920821   16992 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0408 22:46:29.920837   16992 main.go:141] libmachine: (addons-355098) Calling .GetSSHHostname
	I0408 22:46:29.921670   16992 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 22:46:29.921725   16992 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 22:46:29.924207   16992 main.go:141] libmachine: (addons-355098) DBG | domain addons-355098 has defined MAC address 52:54:00:ca:e2:e3 in network mk-addons-355098
	I0408 22:46:29.924361   16992 main.go:141] libmachine: (addons-355098) DBG | domain addons-355098 has defined MAC address 52:54:00:ca:e2:e3 in network mk-addons-355098
	I0408 22:46:29.924670   16992 main.go:141] libmachine: (addons-355098) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:e2:e3", ip: ""} in network mk-addons-355098: {Iface:virbr1 ExpiryTime:2025-04-08 23:45:58 +0000 UTC Type:0 Mac:52:54:00:ca:e2:e3 Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:addons-355098 Clientid:01:52:54:00:ca:e2:e3}
	I0408 22:46:29.924690   16992 main.go:141] libmachine: (addons-355098) DBG | domain addons-355098 has defined IP address 192.168.39.199 and MAC address 52:54:00:ca:e2:e3 in network mk-addons-355098
	I0408 22:46:29.924866   16992 main.go:141] libmachine: (addons-355098) Calling .GetSSHPort
	I0408 22:46:29.924933   16992 main.go:141] libmachine: (addons-355098) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:e2:e3", ip: ""} in network mk-addons-355098: {Iface:virbr1 ExpiryTime:2025-04-08 23:45:58 +0000 UTC Type:0 Mac:52:54:00:ca:e2:e3 Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:addons-355098 Clientid:01:52:54:00:ca:e2:e3}
	I0408 22:46:29.924948   16992 main.go:141] libmachine: (addons-355098) DBG | domain addons-355098 has defined IP address 192.168.39.199 and MAC address 52:54:00:ca:e2:e3 in network mk-addons-355098
	I0408 22:46:29.924974   16992 main.go:141] libmachine: (addons-355098) Calling .GetSSHPort
	I0408 22:46:29.925120   16992 main.go:141] libmachine: (addons-355098) Calling .GetSSHKeyPath
	I0408 22:46:29.925163   16992 main.go:141] libmachine: (addons-355098) Calling .GetSSHKeyPath
	I0408 22:46:29.925226   16992 main.go:141] libmachine: (addons-355098) Calling .GetSSHUsername
	I0408 22:46:29.925349   16992 main.go:141] libmachine: (addons-355098) Calling .GetSSHUsername
	I0408 22:46:29.925345   16992 sshutil.go:53] new ssh client: &{IP:192.168.39.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20501-9125/.minikube/machines/addons-355098/id_rsa Username:docker}
	I0408 22:46:29.925498   16992 sshutil.go:53] new ssh client: &{IP:192.168.39.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20501-9125/.minikube/machines/addons-355098/id_rsa Username:docker}
	W0408 22:46:29.926116   16992 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:40166->192.168.39.199:22: read: connection reset by peer
	I0408 22:46:29.926145   16992 retry.go:31] will retry after 316.963774ms: ssh: handshake failed: read tcp 192.168.39.1:40166->192.168.39.199:22: read: connection reset by peer
	I0408 22:46:29.939653   16992 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39517
	I0408 22:46:29.940152   16992 main.go:141] libmachine: () Calling .GetVersion
	I0408 22:46:29.940633   16992 main.go:141] libmachine: Using API Version  1
	I0408 22:46:29.940677   16992 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 22:46:29.941012   16992 main.go:141] libmachine: () Calling .GetMachineName
	I0408 22:46:29.941165   16992 main.go:141] libmachine: (addons-355098) Calling .GetState
	I0408 22:46:29.942705   16992 main.go:141] libmachine: (addons-355098) Calling .DriverName
	I0408 22:46:29.944376   16992 out.go:177]   - Using image docker.io/busybox:stable
	I0408 22:46:29.945653   16992 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0408 22:46:29.946842   16992 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0408 22:46:29.946865   16992 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0408 22:46:29.946885   16992 main.go:141] libmachine: (addons-355098) Calling .GetSSHHostname
	I0408 22:46:29.950208   16992 main.go:141] libmachine: (addons-355098) DBG | domain addons-355098 has defined MAC address 52:54:00:ca:e2:e3 in network mk-addons-355098
	I0408 22:46:29.950652   16992 main.go:141] libmachine: (addons-355098) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:e2:e3", ip: ""} in network mk-addons-355098: {Iface:virbr1 ExpiryTime:2025-04-08 23:45:58 +0000 UTC Type:0 Mac:52:54:00:ca:e2:e3 Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:addons-355098 Clientid:01:52:54:00:ca:e2:e3}
	I0408 22:46:29.950682   16992 main.go:141] libmachine: (addons-355098) DBG | domain addons-355098 has defined IP address 192.168.39.199 and MAC address 52:54:00:ca:e2:e3 in network mk-addons-355098
	I0408 22:46:29.950809   16992 main.go:141] libmachine: (addons-355098) Calling .GetSSHPort
	I0408 22:46:29.950982   16992 main.go:141] libmachine: (addons-355098) Calling .GetSSHKeyPath
	I0408 22:46:29.951133   16992 main.go:141] libmachine: (addons-355098) Calling .GetSSHUsername
	I0408 22:46:29.951219   16992 sshutil.go:53] new ssh client: &{IP:192.168.39.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20501-9125/.minikube/machines/addons-355098/id_rsa Username:docker}
	I0408 22:46:30.227586   16992 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I0408 22:46:30.227607   16992 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (14539 bytes)
	I0408 22:46:30.274066   16992 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0408 22:46:30.286130   16992 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I0408 22:46:30.286171   16992 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0408 22:46:30.339275   16992 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0408 22:46:30.342388   16992 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0408 22:46:30.351675   16992 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0408 22:46:30.356238   16992 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0408 22:46:30.356259   16992 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0408 22:46:30.363748   16992 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0408 22:46:30.374257   16992 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0408 22:46:30.374274   16992 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0408 22:46:30.395286   16992 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0408 22:46:30.395309   16992 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0408 22:46:30.405066   16992 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0408 22:46:30.405084   16992 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0408 22:46:30.426408   16992 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0408 22:46:30.429450   16992 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0408 22:46:30.429468   16992 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0408 22:46:30.503393   16992 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0408 22:46:30.505222   16992 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0408 22:46:30.513728   16992 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0408 22:46:30.513747   16992 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0408 22:46:30.514245   16992 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0408 22:46:30.514261   16992 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0408 22:46:30.539167   16992 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0408 22:46:30.539196   16992 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0408 22:46:30.549829   16992 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0408 22:46:30.549850   16992 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0408 22:46:30.650556   16992 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0408 22:46:30.650586   16992 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0408 22:46:30.679451   16992 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0408 22:46:30.679477   16992 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0408 22:46:30.691562   16992 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0408 22:46:30.691582   16992 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0408 22:46:30.768586   16992 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0408 22:46:30.768623   16992 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0408 22:46:30.769463   16992 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0408 22:46:30.814640   16992 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0408 22:46:30.859461   16992 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0408 22:46:30.866453   16992 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0408 22:46:30.866479   16992 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0408 22:46:30.938732   16992 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0408 22:46:30.938763   16992 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0408 22:46:30.976925   16992 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0408 22:46:30.976956   16992 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0408 22:46:31.060452   16992 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0408 22:46:31.060479   16992 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0408 22:46:31.063770   16992 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0408 22:46:31.063798   16992 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0408 22:46:31.129348   16992 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0408 22:46:31.347415   16992 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0408 22:46:31.347439   16992 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0408 22:46:31.389958   16992 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0408 22:46:31.389990   16992 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0408 22:46:31.577820   16992 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0408 22:46:31.577850   16992 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0408 22:46:31.789828   16992 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0408 22:46:31.793488   16992 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0408 22:46:31.793511   16992 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0408 22:46:32.030307   16992 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0408 22:46:32.030337   16992 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0408 22:46:32.305551   16992 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0408 22:46:32.305572   16992 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0408 22:46:32.523447   16992 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0408 22:46:36.715578   16992 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0408 22:46:36.715634   16992 main.go:141] libmachine: (addons-355098) Calling .GetSSHHostname
	I0408 22:46:36.718704   16992 main.go:141] libmachine: (addons-355098) DBG | domain addons-355098 has defined MAC address 52:54:00:ca:e2:e3 in network mk-addons-355098
	I0408 22:46:36.719189   16992 main.go:141] libmachine: (addons-355098) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:e2:e3", ip: ""} in network mk-addons-355098: {Iface:virbr1 ExpiryTime:2025-04-08 23:45:58 +0000 UTC Type:0 Mac:52:54:00:ca:e2:e3 Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:addons-355098 Clientid:01:52:54:00:ca:e2:e3}
	I0408 22:46:36.719225   16992 main.go:141] libmachine: (addons-355098) DBG | domain addons-355098 has defined IP address 192.168.39.199 and MAC address 52:54:00:ca:e2:e3 in network mk-addons-355098
	I0408 22:46:36.719449   16992 main.go:141] libmachine: (addons-355098) Calling .GetSSHPort
	I0408 22:46:36.719678   16992 main.go:141] libmachine: (addons-355098) Calling .GetSSHKeyPath
	I0408 22:46:36.719881   16992 main.go:141] libmachine: (addons-355098) Calling .GetSSHUsername
	I0408 22:46:36.720046   16992 sshutil.go:53] new ssh client: &{IP:192.168.39.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20501-9125/.minikube/machines/addons-355098/id_rsa Username:docker}
	I0408 22:46:37.011412   16992 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0408 22:46:37.131788   16992 addons.go:238] Setting addon gcp-auth=true in "addons-355098"
	I0408 22:46:37.131859   16992 host.go:66] Checking if "addons-355098" exists ...
	I0408 22:46:37.132374   16992 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 22:46:37.132424   16992 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 22:46:37.148261   16992 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42783
	I0408 22:46:37.148788   16992 main.go:141] libmachine: () Calling .GetVersion
	I0408 22:46:37.149293   16992 main.go:141] libmachine: Using API Version  1
	I0408 22:46:37.149315   16992 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 22:46:37.149650   16992 main.go:141] libmachine: () Calling .GetMachineName
	I0408 22:46:37.150288   16992 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 22:46:37.150346   16992 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 22:46:37.165866   16992 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44297
	I0408 22:46:37.166309   16992 main.go:141] libmachine: () Calling .GetVersion
	I0408 22:46:37.166727   16992 main.go:141] libmachine: Using API Version  1
	I0408 22:46:37.166752   16992 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 22:46:37.167066   16992 main.go:141] libmachine: () Calling .GetMachineName
	I0408 22:46:37.167254   16992 main.go:141] libmachine: (addons-355098) Calling .GetState
	I0408 22:46:37.168947   16992 main.go:141] libmachine: (addons-355098) Calling .DriverName
	I0408 22:46:37.169146   16992 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0408 22:46:37.169168   16992 main.go:141] libmachine: (addons-355098) Calling .GetSSHHostname
	I0408 22:46:37.171810   16992 main.go:141] libmachine: (addons-355098) DBG | domain addons-355098 has defined MAC address 52:54:00:ca:e2:e3 in network mk-addons-355098
	I0408 22:46:37.172242   16992 main.go:141] libmachine: (addons-355098) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:e2:e3", ip: ""} in network mk-addons-355098: {Iface:virbr1 ExpiryTime:2025-04-08 23:45:58 +0000 UTC Type:0 Mac:52:54:00:ca:e2:e3 Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:addons-355098 Clientid:01:52:54:00:ca:e2:e3}
	I0408 22:46:37.172266   16992 main.go:141] libmachine: (addons-355098) DBG | domain addons-355098 has defined IP address 192.168.39.199 and MAC address 52:54:00:ca:e2:e3 in network mk-addons-355098
	I0408 22:46:37.172424   16992 main.go:141] libmachine: (addons-355098) Calling .GetSSHPort
	I0408 22:46:37.172587   16992 main.go:141] libmachine: (addons-355098) Calling .GetSSHKeyPath
	I0408 22:46:37.172736   16992 main.go:141] libmachine: (addons-355098) Calling .GetSSHUsername
	I0408 22:46:37.172953   16992 sshutil.go:53] new ssh client: &{IP:192.168.39.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20501-9125/.minikube/machines/addons-355098/id_rsa Username:docker}
	I0408 22:46:37.621096   16992 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.346993682s)
	I0408 22:46:37.621149   16992 main.go:141] libmachine: Making call to close driver server
	I0408 22:46:37.621160   16992 main.go:141] libmachine: (addons-355098) Calling .Close
	I0408 22:46:37.621188   16992 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (7.281878539s)
	I0408 22:46:37.621232   16992 main.go:141] libmachine: Making call to close driver server
	I0408 22:46:37.621260   16992 main.go:141] libmachine: (addons-355098) Calling .Close
	I0408 22:46:37.621274   16992 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (7.278858359s)
	I0408 22:46:37.621296   16992 main.go:141] libmachine: Making call to close driver server
	I0408 22:46:37.621308   16992 main.go:141] libmachine: (addons-355098) Calling .Close
	I0408 22:46:37.621343   16992 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (7.269633297s)
	I0408 22:46:37.621391   16992 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (7.257625271s)
	I0408 22:46:37.621415   16992 main.go:141] libmachine: Making call to close driver server
	I0408 22:46:37.621420   16992 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (7.216317905s)
	I0408 22:46:37.621436   16992 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
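The two entries above record how minikube injects the host.minikube.internal record: the kube-system/coredns ConfigMap is piped through sed to add a hosts block ahead of the forward directive (and a log directive ahead of errors), then written back with kubectl replace. An abridged sketch of the resulting Corefile fragment, assuming the stock kubeadm layout for the directives the sed script does not touch:

	.:53 {
	    log
	    errors
	    ...
	    hosts {
	       192.168.39.1 host.minikube.internal
	       fallthrough
	    }
	    forward . /etc/resolv.conf
	    ...
	}
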
	I0408 22:46:37.621570   16992 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.195137s)
	I0408 22:46:37.621594   16992 main.go:141] libmachine: Making call to close driver server
	I0408 22:46:37.621604   16992 main.go:141] libmachine: (addons-355098) Calling .Close
	I0408 22:46:37.621490   16992 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (7.216386738s)
	I0408 22:46:37.621686   16992 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.118273746s)
	I0408 22:46:37.621424   16992 main.go:141] libmachine: (addons-355098) Calling .Close
	I0408 22:46:37.621724   16992 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (7.116483875s)
	I0408 22:46:37.621737   16992 main.go:141] libmachine: Making call to close driver server
	I0408 22:46:37.621700   16992 main.go:141] libmachine: Making call to close driver server
	I0408 22:46:37.621747   16992 main.go:141] libmachine: (addons-355098) Calling .Close
	I0408 22:46:37.621755   16992 main.go:141] libmachine: (addons-355098) Calling .Close
	I0408 22:46:37.621805   16992 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (6.852316233s)
	I0408 22:46:37.621820   16992 main.go:141] libmachine: Making call to close driver server
	I0408 22:46:37.621829   16992 main.go:141] libmachine: (addons-355098) Calling .Close
	I0408 22:46:37.621878   16992 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.807214923s)
	I0408 22:46:37.621892   16992 main.go:141] libmachine: Making call to close driver server
	I0408 22:46:37.621900   16992 main.go:141] libmachine: (addons-355098) Calling .Close
	I0408 22:46:37.621988   16992 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.762495171s)
	I0408 22:46:37.622003   16992 main.go:141] libmachine: Making call to close driver server
	I0408 22:46:37.622012   16992 main.go:141] libmachine: (addons-355098) Calling .Close
	I0408 22:46:37.622084   16992 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.492709004s)
	I0408 22:46:37.622098   16992 main.go:141] libmachine: Making call to close driver server
	I0408 22:46:37.622107   16992 main.go:141] libmachine: (addons-355098) Calling .Close
	I0408 22:46:37.622235   16992 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (5.83237577s)
	W0408 22:46:37.622259   16992 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0408 22:46:37.622282   16992 retry.go:31] will retry after 141.770706ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
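This failure is the usual CRD establishment race: the VolumeSnapshotClass is submitted in the same kubectl apply that creates the snapshot.storage.k8s.io CRDs, and the API server has not yet established them, hence "ensure CRDs are installed first". minikube simply retries (and later re-applies with --force, see the 22:46:37.764486 entry below). A minimal sketch of the same ordering made explicit, assuming abbreviated manifest names rather than the full /etc/kubernetes/addons paths used in the log:

	# Create the snapshot CRDs first and wait until they are established,
	# then create the VolumeSnapshotClass that depends on them.
	kubectl apply -f snapshot.storage.k8s.io_volumesnapshotclasses.yaml \
	    -f snapshot.storage.k8s.io_volumesnapshotcontents.yaml \
	    -f snapshot.storage.k8s.io_volumesnapshots.yaml
	kubectl wait --for condition=established --timeout=60s \
	    crd/volumesnapshotclasses.snapshot.storage.k8s.io
	kubectl apply -f csi-hostpath-snapshotclass.yaml
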
	I0408 22:46:37.622862   16992 node_ready.go:35] waiting up to 6m0s for node "addons-355098" to be "Ready" ...
	I0408 22:46:37.624418   16992 main.go:141] libmachine: (addons-355098) DBG | Closing plugin on server side
	I0408 22:46:37.624423   16992 main.go:141] libmachine: Successfully made call to close driver server
	I0408 22:46:37.624439   16992 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 22:46:37.624447   16992 main.go:141] libmachine: (addons-355098) DBG | Closing plugin on server side
	I0408 22:46:37.624448   16992 main.go:141] libmachine: Making call to close driver server
	I0408 22:46:37.624460   16992 main.go:141] libmachine: (addons-355098) Calling .Close
	I0408 22:46:37.624537   16992 main.go:141] libmachine: Successfully made call to close driver server
	I0408 22:46:37.624562   16992 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 22:46:37.624570   16992 main.go:141] libmachine: Making call to close driver server
	I0408 22:46:37.624577   16992 main.go:141] libmachine: (addons-355098) Calling .Close
	I0408 22:46:37.624710   16992 main.go:141] libmachine: (addons-355098) DBG | Closing plugin on server side
	I0408 22:46:37.624763   16992 main.go:141] libmachine: Successfully made call to close driver server
	I0408 22:46:37.624771   16992 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 22:46:37.624948   16992 main.go:141] libmachine: Successfully made call to close driver server
	I0408 22:46:37.624956   16992 main.go:141] libmachine: (addons-355098) DBG | Closing plugin on server side
	I0408 22:46:37.624961   16992 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 22:46:37.625001   16992 main.go:141] libmachine: Making call to close driver server
	I0408 22:46:37.626331   16992 main.go:141] libmachine: (addons-355098) Calling .Close
	I0408 22:46:37.625043   16992 main.go:141] libmachine: Successfully made call to close driver server
	I0408 22:46:37.626389   16992 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 22:46:37.626398   16992 main.go:141] libmachine: Making call to close driver server
	I0408 22:46:37.626404   16992 main.go:141] libmachine: (addons-355098) Calling .Close
	I0408 22:46:37.625064   16992 main.go:141] libmachine: (addons-355098) DBG | Closing plugin on server side
	I0408 22:46:37.625057   16992 main.go:141] libmachine: (addons-355098) DBG | Closing plugin on server side
	I0408 22:46:37.625086   16992 main.go:141] libmachine: Successfully made call to close driver server
	I0408 22:46:37.626444   16992 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 22:46:37.626452   16992 main.go:141] libmachine: Making call to close driver server
	I0408 22:46:37.626458   16992 main.go:141] libmachine: (addons-355098) Calling .Close
	I0408 22:46:37.625102   16992 main.go:141] libmachine: (addons-355098) DBG | Closing plugin on server side
	I0408 22:46:37.625105   16992 main.go:141] libmachine: Successfully made call to close driver server
	I0408 22:46:37.626492   16992 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 22:46:37.626499   16992 main.go:141] libmachine: Making call to close driver server
	I0408 22:46:37.626506   16992 main.go:141] libmachine: (addons-355098) Calling .Close
	I0408 22:46:37.626506   16992 main.go:141] libmachine: (addons-355098) DBG | Closing plugin on server side
	I0408 22:46:37.626535   16992 main.go:141] libmachine: Successfully made call to close driver server
	I0408 22:46:37.626542   16992 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 22:46:37.626549   16992 main.go:141] libmachine: Making call to close driver server
	I0408 22:46:37.626555   16992 main.go:141] libmachine: (addons-355098) Calling .Close
	I0408 22:46:37.625118   16992 main.go:141] libmachine: Successfully made call to close driver server
	I0408 22:46:37.626587   16992 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 22:46:37.626615   16992 main.go:141] libmachine: Making call to close driver server
	I0408 22:46:37.626624   16992 main.go:141] libmachine: (addons-355098) Calling .Close
	I0408 22:46:37.626691   16992 main.go:141] libmachine: Successfully made call to close driver server
	I0408 22:46:37.626701   16992 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 22:46:37.625123   16992 main.go:141] libmachine: (addons-355098) DBG | Closing plugin on server side
	I0408 22:46:37.626836   16992 main.go:141] libmachine: (addons-355098) DBG | Closing plugin on server side
	I0408 22:46:37.625129   16992 main.go:141] libmachine: (addons-355098) DBG | Closing plugin on server side
	I0408 22:46:37.626865   16992 main.go:141] libmachine: (addons-355098) DBG | Closing plugin on server side
	I0408 22:46:37.625141   16992 main.go:141] libmachine: (addons-355098) DBG | Closing plugin on server side
	I0408 22:46:37.625146   16992 main.go:141] libmachine: Successfully made call to close driver server
	I0408 22:46:37.626901   16992 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 22:46:37.626911   16992 main.go:141] libmachine: Making call to close driver server
	I0408 22:46:37.626919   16992 main.go:141] libmachine: (addons-355098) Calling .Close
	I0408 22:46:37.626936   16992 main.go:141] libmachine: Successfully made call to close driver server
	I0408 22:46:37.626949   16992 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 22:46:37.625150   16992 main.go:141] libmachine: Successfully made call to close driver server
	I0408 22:46:37.626959   16992 addons.go:479] Verifying addon metrics-server=true in "addons-355098"
	I0408 22:46:37.626968   16992 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 22:46:37.625158   16992 main.go:141] libmachine: (addons-355098) DBG | Closing plugin on server side
	I0408 22:46:37.625165   16992 main.go:141] libmachine: (addons-355098) DBG | Closing plugin on server side
	I0408 22:46:37.625175   16992 main.go:141] libmachine: Successfully made call to close driver server
	I0408 22:46:37.625181   16992 main.go:141] libmachine: (addons-355098) DBG | Closing plugin on server side
	I0408 22:46:37.625183   16992 main.go:141] libmachine: Successfully made call to close driver server
	I0408 22:46:37.625155   16992 main.go:141] libmachine: Successfully made call to close driver server
	I0408 22:46:37.626995   16992 main.go:141] libmachine: Making call to close driver server
	I0408 22:46:37.627003   16992 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 22:46:37.626996   16992 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 22:46:37.627012   16992 main.go:141] libmachine: Making call to close driver server
	I0408 22:46:37.627016   16992 main.go:141] libmachine: Making call to close driver server
	I0408 22:46:37.627020   16992 main.go:141] libmachine: (addons-355098) Calling .Close
	I0408 22:46:37.627020   16992 main.go:141] libmachine: Successfully made call to close driver server
	I0408 22:46:37.627031   16992 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 22:46:37.626994   16992 main.go:141] libmachine: (addons-355098) DBG | Closing plugin on server side
	I0408 22:46:37.627089   16992 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 22:46:37.627098   16992 main.go:141] libmachine: Making call to close driver server
	I0408 22:46:37.627104   16992 main.go:141] libmachine: (addons-355098) Calling .Close
	I0408 22:46:37.627126   16992 main.go:141] libmachine: (addons-355098) DBG | Closing plugin on server side
	I0408 22:46:37.627004   16992 main.go:141] libmachine: (addons-355098) Calling .Close
	I0408 22:46:37.627149   16992 main.go:141] libmachine: Successfully made call to close driver server
	I0408 22:46:37.627156   16992 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 22:46:37.627022   16992 main.go:141] libmachine: (addons-355098) Calling .Close
	I0408 22:46:37.627163   16992 addons.go:479] Verifying addon registry=true in "addons-355098"
	I0408 22:46:37.627313   16992 main.go:141] libmachine: Successfully made call to close driver server
	I0408 22:46:37.627320   16992 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 22:46:37.626779   16992 main.go:141] libmachine: Successfully made call to close driver server
	I0408 22:46:37.627348   16992 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 22:46:37.628305   16992 main.go:141] libmachine: Successfully made call to close driver server
	I0408 22:46:37.628320   16992 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 22:46:37.628443   16992 main.go:141] libmachine: (addons-355098) DBG | Closing plugin on server side
	I0408 22:46:37.628463   16992 main.go:141] libmachine: (addons-355098) DBG | Closing plugin on server side
	I0408 22:46:37.628779   16992 main.go:141] libmachine: Successfully made call to close driver server
	I0408 22:46:37.628790   16992 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 22:46:37.628800   16992 addons.go:479] Verifying addon ingress=true in "addons-355098"
	I0408 22:46:37.628984   16992 main.go:141] libmachine: Successfully made call to close driver server
	I0408 22:46:37.628994   16992 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 22:46:37.629026   16992 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-355098 service yakd-dashboard -n yakd-dashboard
	
	I0408 22:46:37.627291   16992 main.go:141] libmachine: (addons-355098) DBG | Closing plugin on server side
	I0408 22:46:37.630203   16992 main.go:141] libmachine: (addons-355098) DBG | Closing plugin on server side
	I0408 22:46:37.630325   16992 main.go:141] libmachine: Successfully made call to close driver server
	I0408 22:46:37.630333   16992 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 22:46:37.630376   16992 out.go:177] * Verifying registry addon...
	I0408 22:46:37.630414   16992 out.go:177] * Verifying ingress addon...
	I0408 22:46:37.632576   16992 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0408 22:46:37.632773   16992 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0408 22:46:37.640558   16992 node_ready.go:49] node "addons-355098" has status "Ready":"True"
	I0408 22:46:37.640585   16992 node_ready.go:38] duration metric: took 17.706577ms for node "addons-355098" to be "Ready" ...
	I0408 22:46:37.640597   16992 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0408 22:46:37.654009   16992 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0408 22:46:37.654038   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 22:46:37.669601   16992 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0408 22:46:37.669620   16992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 22:46:37.670089   16992 pod_ready.go:79] waiting up to 6m0s for pod "amd-gpu-device-plugin-bfpvp" in "kube-system" namespace to be "Ready" ...
	I0408 22:46:37.695155   16992 main.go:141] libmachine: Making call to close driver server
	I0408 22:46:37.695175   16992 main.go:141] libmachine: (addons-355098) Calling .Close
	I0408 22:46:37.695554   16992 main.go:141] libmachine: Successfully made call to close driver server
	I0408 22:46:37.695576   16992 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 22:46:37.701565   16992 main.go:141] libmachine: Making call to close driver server
	I0408 22:46:37.701585   16992 main.go:141] libmachine: (addons-355098) Calling .Close
	I0408 22:46:37.701811   16992 main.go:141] libmachine: Successfully made call to close driver server
	I0408 22:46:37.701834   16992 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 22:46:37.764486   16992 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0408 22:46:38.125514   16992 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-355098" context rescaled to 1 replicas
	I0408 22:46:38.139602   16992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 22:46:38.139657   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 22:46:38.650516   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 22:46:38.650644   16992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 22:46:38.736326   16992 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.212825338s)
	I0408 22:46:38.736391   16992 main.go:141] libmachine: Making call to close driver server
	I0408 22:46:38.736403   16992 main.go:141] libmachine: (addons-355098) Calling .Close
	I0408 22:46:38.736416   16992 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.567247612s)
	I0408 22:46:38.736743   16992 main.go:141] libmachine: Successfully made call to close driver server
	I0408 22:46:38.736759   16992 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 22:46:38.736768   16992 main.go:141] libmachine: Making call to close driver server
	I0408 22:46:38.736775   16992 main.go:141] libmachine: (addons-355098) Calling .Close
	I0408 22:46:38.737023   16992 main.go:141] libmachine: (addons-355098) DBG | Closing plugin on server side
	I0408 22:46:38.737101   16992 main.go:141] libmachine: Successfully made call to close driver server
	I0408 22:46:38.737117   16992 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 22:46:38.737129   16992 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-355098"
	I0408 22:46:38.738016   16992 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I0408 22:46:38.739020   16992 out.go:177] * Verifying csi-hostpath-driver addon...
	I0408 22:46:38.740659   16992 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I0408 22:46:38.741433   16992 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0408 22:46:38.741448   16992 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0408 22:46:38.741641   16992 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0408 22:46:38.779601   16992 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0408 22:46:38.779622   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 22:46:38.783252   16992 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0408 22:46:38.783278   16992 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0408 22:46:38.840249   16992 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0408 22:46:38.840271   16992 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0408 22:46:38.911942   16992 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0408 22:46:39.136995   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 22:46:39.137491   16992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 22:46:39.245661   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 22:46:39.635663   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 22:46:39.636085   16992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 22:46:39.675203   16992 pod_ready.go:103] pod "amd-gpu-device-plugin-bfpvp" in "kube-system" namespace has status "Ready":"False"
	I0408 22:46:39.740310   16992 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.975769207s)
	I0408 22:46:39.740427   16992 main.go:141] libmachine: Making call to close driver server
	I0408 22:46:39.740444   16992 main.go:141] libmachine: (addons-355098) Calling .Close
	I0408 22:46:39.740776   16992 main.go:141] libmachine: Successfully made call to close driver server
	I0408 22:46:39.740797   16992 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 22:46:39.740819   16992 main.go:141] libmachine: Making call to close driver server
	I0408 22:46:39.740827   16992 main.go:141] libmachine: (addons-355098) Calling .Close
	I0408 22:46:39.741100   16992 main.go:141] libmachine: (addons-355098) DBG | Closing plugin on server side
	I0408 22:46:39.741141   16992 main.go:141] libmachine: Successfully made call to close driver server
	I0408 22:46:39.741147   16992 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 22:46:39.745625   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 22:46:40.162236   16992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 22:46:40.162253   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 22:46:40.279485   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 22:46:40.316936   16992 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.40494758s)
	I0408 22:46:40.316990   16992 main.go:141] libmachine: Making call to close driver server
	I0408 22:46:40.316998   16992 main.go:141] libmachine: (addons-355098) Calling .Close
	I0408 22:46:40.317300   16992 main.go:141] libmachine: (addons-355098) DBG | Closing plugin on server side
	I0408 22:46:40.317335   16992 main.go:141] libmachine: Successfully made call to close driver server
	I0408 22:46:40.317371   16992 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 22:46:40.317380   16992 main.go:141] libmachine: Making call to close driver server
	I0408 22:46:40.317387   16992 main.go:141] libmachine: (addons-355098) Calling .Close
	I0408 22:46:40.317698   16992 main.go:141] libmachine: (addons-355098) DBG | Closing plugin on server side
	I0408 22:46:40.317719   16992 main.go:141] libmachine: Successfully made call to close driver server
	I0408 22:46:40.317732   16992 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 22:46:40.318655   16992 addons.go:479] Verifying addon gcp-auth=true in "addons-355098"
	I0408 22:46:40.319921   16992 out.go:177] * Verifying gcp-auth addon...
	I0408 22:46:40.322081   16992 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0408 22:46:40.347989   16992 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0408 22:46:40.348016   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 22:46:40.635892   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 22:46:40.636282   16992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 22:46:40.745633   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 22:46:40.824865   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 22:46:41.136699   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 22:46:41.136882   16992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 22:46:41.245997   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 22:46:41.345999   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 22:46:41.635860   16992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 22:46:41.636382   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 22:46:41.745794   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 22:46:41.825227   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 22:46:42.138024   16992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 22:46:42.138132   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 22:46:42.176308   16992 pod_ready.go:103] pod "amd-gpu-device-plugin-bfpvp" in "kube-system" namespace has status "Ready":"False"
	I0408 22:46:42.245598   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 22:46:42.325303   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 22:46:42.636782   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 22:46:42.636905   16992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 22:46:42.745373   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 22:46:42.825257   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 22:46:43.136620   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 22:46:43.139287   16992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 22:46:43.244596   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 22:46:43.325421   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 22:46:43.637299   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 22:46:43.637352   16992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 22:46:43.745319   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 22:46:43.845900   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 22:46:44.136819   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 22:46:44.136866   16992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 22:46:44.176685   16992 pod_ready.go:103] pod "amd-gpu-device-plugin-bfpvp" in "kube-system" namespace has status "Ready":"False"
	I0408 22:46:44.245722   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 22:46:44.325293   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 22:46:44.636538   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 22:46:44.636595   16992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 22:46:44.744685   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 22:46:44.825726   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 22:46:45.136696   16992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 22:46:45.137114   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 22:46:45.601031   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 22:46:45.601535   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 22:46:45.700585   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 22:46:45.700721   16992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 22:46:45.745555   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 22:46:45.825620   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 22:46:46.136531   16992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 22:46:46.136540   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 22:46:46.245450   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 22:46:46.325251   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 22:46:46.636587   16992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 22:46:46.638334   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 22:46:46.674843   16992 pod_ready.go:103] pod "amd-gpu-device-plugin-bfpvp" in "kube-system" namespace has status "Ready":"False"
	I0408 22:46:46.745012   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 22:46:46.825727   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 22:46:47.136623   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 22:46:47.136633   16992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 22:46:47.248724   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 22:46:47.325498   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 22:46:47.862617   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 22:46:47.862811   16992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 22:46:47.863257   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 22:46:47.863939   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 22:46:48.138455   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 22:46:48.138738   16992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 22:46:48.245804   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 22:46:48.324797   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 22:46:48.636106   16992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 22:46:48.636403   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 22:46:48.675450   16992 pod_ready.go:103] pod "amd-gpu-device-plugin-bfpvp" in "kube-system" namespace has status "Ready":"False"
	I0408 22:46:48.745328   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 22:46:48.824819   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 22:46:49.136328   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 22:46:49.136561   16992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 22:46:49.244581   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 22:46:49.325399   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 22:46:49.636060   16992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 22:46:49.636110   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 22:46:49.746513   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 22:46:49.825528   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 22:46:50.296402   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 22:46:50.298478   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 22:46:50.299075   16992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 22:46:50.326117   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 22:46:50.636714   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 22:46:50.636807   16992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 22:46:50.678706   16992 pod_ready.go:103] pod "amd-gpu-device-plugin-bfpvp" in "kube-system" namespace has status "Ready":"False"
	I0408 22:46:50.744636   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 22:46:50.825837   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 22:46:51.136881   16992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 22:46:51.137012   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 22:46:51.244900   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 22:46:51.325421   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 22:46:51.639473   16992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 22:46:51.639511   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 22:46:51.745967   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 22:46:51.826178   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 22:46:52.136319   16992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 22:46:52.136736   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 22:46:52.245408   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 22:46:52.324912   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 22:46:52.636468   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 22:46:52.636496   16992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 22:46:52.744434   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 22:46:52.825049   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 22:46:53.135957   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 22:46:53.135974   16992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 22:46:53.176989   16992 pod_ready.go:103] pod "amd-gpu-device-plugin-bfpvp" in "kube-system" namespace has status "Ready":"False"
	I0408 22:46:53.244899   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 22:46:53.324837   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 22:46:53.636595   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 22:46:53.636599   16992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 22:46:53.744996   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 22:46:53.826252   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 22:46:54.136940   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 22:46:54.137061   16992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 22:46:54.244757   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 22:46:54.325269   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 22:46:54.636994   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 22:46:54.637012   16992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 22:46:54.745073   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 22:46:54.825040   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 22:46:55.136085   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 22:46:55.136228   16992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 22:46:55.245319   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 22:46:55.325119   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 22:46:55.636398   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 22:46:55.636514   16992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 22:46:55.675083   16992 pod_ready.go:103] pod "amd-gpu-device-plugin-bfpvp" in "kube-system" namespace has status "Ready":"False"
	I0408 22:46:55.744956   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 22:46:55.826848   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 22:46:56.488677   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 22:46:56.488724   16992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 22:46:56.488974   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 22:46:56.489267   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 22:46:56.637586   16992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 22:46:56.637662   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 22:46:56.745650   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 22:46:56.825547   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 22:46:57.135839   16992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 22:46:57.135914   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 22:46:57.245352   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 22:46:57.324880   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 22:46:57.636462   16992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 22:46:57.636570   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 22:46:57.675197   16992 pod_ready.go:103] pod "amd-gpu-device-plugin-bfpvp" in "kube-system" namespace has status "Ready":"False"
	I0408 22:46:57.745196   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 22:46:57.825411   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 22:46:58.135901   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 22:46:58.136286   16992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 22:46:58.245542   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 22:46:58.325457   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 22:46:58.636082   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 22:46:58.636325   16992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 22:46:58.744858   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 22:46:58.832974   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 22:46:59.139972   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 22:46:59.141010   16992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 22:46:59.245556   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 22:46:59.327323   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 22:46:59.635501   16992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 22:46:59.635827   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 22:46:59.675691   16992 pod_ready.go:103] pod "amd-gpu-device-plugin-bfpvp" in "kube-system" namespace has status "Ready":"False"
	I0408 22:46:59.746018   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 22:46:59.825101   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 22:47:00.135807   16992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 22:47:00.135908   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 22:47:00.245663   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 22:47:00.325363   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 22:47:00.637773   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 22:47:00.637921   16992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 22:47:00.745438   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 22:47:00.825002   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 22:47:01.136628   16992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 22:47:01.137447   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 22:47:01.245702   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 22:47:01.325943   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 22:47:01.636941   16992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 22:47:01.637097   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 22:47:01.746440   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 22:47:01.825184   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 22:47:02.137194   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 22:47:02.137212   16992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 22:47:02.175002   16992 pod_ready.go:93] pod "amd-gpu-device-plugin-bfpvp" in "kube-system" namespace has status "Ready":"True"
	I0408 22:47:02.175024   16992 pod_ready.go:82] duration metric: took 24.504914016s for pod "amd-gpu-device-plugin-bfpvp" in "kube-system" namespace to be "Ready" ...
	I0408 22:47:02.175037   16992 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-jqg2m" in "kube-system" namespace to be "Ready" ...
	I0408 22:47:02.176842   16992 pod_ready.go:98] error getting pod "coredns-668d6bf9bc-jqg2m" in "kube-system" namespace (skipping!): pods "coredns-668d6bf9bc-jqg2m" not found
	I0408 22:47:02.176863   16992 pod_ready.go:82] duration metric: took 1.818458ms for pod "coredns-668d6bf9bc-jqg2m" in "kube-system" namespace to be "Ready" ...
	E0408 22:47:02.176874   16992 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-668d6bf9bc-jqg2m" in "kube-system" namespace (skipping!): pods "coredns-668d6bf9bc-jqg2m" not found
	I0408 22:47:02.176882   16992 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-tmwrn" in "kube-system" namespace to be "Ready" ...
	I0408 22:47:02.180274   16992 pod_ready.go:93] pod "coredns-668d6bf9bc-tmwrn" in "kube-system" namespace has status "Ready":"True"
	I0408 22:47:02.180290   16992 pod_ready.go:82] duration metric: took 3.400347ms for pod "coredns-668d6bf9bc-tmwrn" in "kube-system" namespace to be "Ready" ...
	I0408 22:47:02.180300   16992 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-355098" in "kube-system" namespace to be "Ready" ...
	I0408 22:47:02.183833   16992 pod_ready.go:93] pod "etcd-addons-355098" in "kube-system" namespace has status "Ready":"True"
	I0408 22:47:02.183851   16992 pod_ready.go:82] duration metric: took 3.54383ms for pod "etcd-addons-355098" in "kube-system" namespace to be "Ready" ...
	I0408 22:47:02.183860   16992 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-355098" in "kube-system" namespace to be "Ready" ...
	I0408 22:47:02.187687   16992 pod_ready.go:93] pod "kube-apiserver-addons-355098" in "kube-system" namespace has status "Ready":"True"
	I0408 22:47:02.187705   16992 pod_ready.go:82] duration metric: took 3.82201ms for pod "kube-apiserver-addons-355098" in "kube-system" namespace to be "Ready" ...
	I0408 22:47:02.187714   16992 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-355098" in "kube-system" namespace to be "Ready" ...
	I0408 22:47:02.245050   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 22:47:02.325519   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 22:47:02.373004   16992 pod_ready.go:93] pod "kube-controller-manager-addons-355098" in "kube-system" namespace has status "Ready":"True"
	I0408 22:47:02.373027   16992 pod_ready.go:82] duration metric: took 185.304421ms for pod "kube-controller-manager-addons-355098" in "kube-system" namespace to be "Ready" ...
	I0408 22:47:02.373040   16992 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-t88l4" in "kube-system" namespace to be "Ready" ...
	I0408 22:47:02.637407   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 22:47:02.637606   16992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 22:47:02.745846   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 22:47:02.773085   16992 pod_ready.go:93] pod "kube-proxy-t88l4" in "kube-system" namespace has status "Ready":"True"
	I0408 22:47:02.773113   16992 pod_ready.go:82] duration metric: took 400.064172ms for pod "kube-proxy-t88l4" in "kube-system" namespace to be "Ready" ...
	I0408 22:47:02.773126   16992 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-355098" in "kube-system" namespace to be "Ready" ...
	I0408 22:47:02.825625   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 22:47:03.136682   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 22:47:03.136889   16992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 22:47:03.173627   16992 pod_ready.go:93] pod "kube-scheduler-addons-355098" in "kube-system" namespace has status "Ready":"True"
	I0408 22:47:03.173654   16992 pod_ready.go:82] duration metric: took 400.512156ms for pod "kube-scheduler-addons-355098" in "kube-system" namespace to be "Ready" ...
	I0408 22:47:03.173667   16992 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-7fbb699795-4f8hj" in "kube-system" namespace to be "Ready" ...
	I0408 22:47:03.245532   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 22:47:03.325854   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 22:47:03.636878   16992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 22:47:03.637623   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 22:47:03.744893   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 22:47:03.828742   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 22:47:04.139427   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 22:47:04.139611   16992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 22:47:04.246331   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 22:47:04.324874   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 22:47:04.637642   16992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 22:47:04.637767   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 22:47:04.749599   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 22:47:04.825812   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 22:47:05.136994   16992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 22:47:05.137174   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 22:47:05.179180   16992 pod_ready.go:93] pod "metrics-server-7fbb699795-4f8hj" in "kube-system" namespace has status "Ready":"True"
	I0408 22:47:05.179204   16992 pod_ready.go:82] duration metric: took 2.005528768s for pod "metrics-server-7fbb699795-4f8hj" in "kube-system" namespace to be "Ready" ...
	I0408 22:47:05.179214   16992 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-rxkbz" in "kube-system" namespace to be "Ready" ...
	I0408 22:47:05.245150   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 22:47:05.325694   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 22:47:05.573306   16992 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-rxkbz" in "kube-system" namespace has status "Ready":"True"
	I0408 22:47:05.573325   16992 pod_ready.go:82] duration metric: took 394.1061ms for pod "nvidia-device-plugin-daemonset-rxkbz" in "kube-system" namespace to be "Ready" ...
	I0408 22:47:05.573344   16992 pod_ready.go:39] duration metric: took 27.932727379s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0408 22:47:05.573359   16992 api_server.go:52] waiting for apiserver process to appear ...
	I0408 22:47:05.573405   16992 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 22:47:05.590103   16992 api_server.go:72] duration metric: took 35.846389029s to wait for apiserver process to appear ...
	I0408 22:47:05.590133   16992 api_server.go:88] waiting for apiserver healthz status ...
	I0408 22:47:05.590153   16992 api_server.go:253] Checking apiserver healthz at https://192.168.39.199:8443/healthz ...
	I0408 22:47:05.595191   16992 api_server.go:279] https://192.168.39.199:8443/healthz returned 200:
	ok
	I0408 22:47:05.596177   16992 api_server.go:141] control plane version: v1.32.2
	I0408 22:47:05.596195   16992 api_server.go:131] duration metric: took 6.0562ms to wait for apiserver health ...
	I0408 22:47:05.596202   16992 system_pods.go:43] waiting for kube-system pods to appear ...
	I0408 22:47:05.635940   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 22:47:05.636536   16992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 22:47:05.745914   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 22:47:05.774478   16992 system_pods.go:59] 18 kube-system pods found
	I0408 22:47:05.774506   16992 system_pods.go:61] "amd-gpu-device-plugin-bfpvp" [24c4ff4e-c69c-4ca1-937e-01b646af664a] Running
	I0408 22:47:05.774511   16992 system_pods.go:61] "coredns-668d6bf9bc-tmwrn" [4a36a98a-dcbf-4fdc-b8cd-01bf6e34d08f] Running
	I0408 22:47:05.774517   16992 system_pods.go:61] "csi-hostpath-attacher-0" [cdc61e13-f1a1-452b-a6bc-bf8209d5dad5] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0408 22:47:05.774522   16992 system_pods.go:61] "csi-hostpath-resizer-0" [d77f9f5f-452a-4be9-94f3-e2345d7b4f24] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0408 22:47:05.774529   16992 system_pods.go:61] "csi-hostpathplugin-j6wgl" [2d612896-e22e-4c45-a4e1-604fb6e6b85b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0408 22:47:05.774534   16992 system_pods.go:61] "etcd-addons-355098" [f0f3b539-bca5-40af-87b2-599c6de5cc93] Running
	I0408 22:47:05.774538   16992 system_pods.go:61] "kube-apiserver-addons-355098" [bb1e01b1-3f83-464e-a15a-f849707657ee] Running
	I0408 22:47:05.774542   16992 system_pods.go:61] "kube-controller-manager-addons-355098" [3fa775f1-ce3a-48e6-94bf-a60d303d78a6] Running
	I0408 22:47:05.774546   16992 system_pods.go:61] "kube-ingress-dns-minikube" [25c27a56-63e6-47c2-b868-3e39febb0bd3] Running
	I0408 22:47:05.774549   16992 system_pods.go:61] "kube-proxy-t88l4" [6f63d680-43c5-4442-b7e3-d67d63e08c91] Running
	I0408 22:47:05.774552   16992 system_pods.go:61] "kube-scheduler-addons-355098" [94ad73ea-95aa-4e5d-8580-d27a8a37356c] Running
	I0408 22:47:05.774555   16992 system_pods.go:61] "metrics-server-7fbb699795-4f8hj" [73c9a758-36b5-417d-acd3-24e45007b5ae] Running
	I0408 22:47:05.774559   16992 system_pods.go:61] "nvidia-device-plugin-daemonset-rxkbz" [7c7ef4d7-3bef-4311-b3c2-6811114c281e] Running
	I0408 22:47:05.774563   16992 system_pods.go:61] "registry-6c88467877-78d7v" [7430b453-dc57-4eca-89e7-132b388f3fb8] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0408 22:47:05.774568   16992 system_pods.go:61] "registry-proxy-g8xx6" [baa26201-896c-4f4d-ac83-9ee192e0cb9f] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0408 22:47:05.774578   16992 system_pods.go:61] "snapshot-controller-68b874b76f-lsd4m" [c2de5602-2ab4-40b9-9b5a-ed46145c609a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0408 22:47:05.774586   16992 system_pods.go:61] "snapshot-controller-68b874b76f-mhk89" [6b516a7e-d94a-431b-b7c3-b558c409e891] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0408 22:47:05.774589   16992 system_pods.go:61] "storage-provisioner" [b9963b95-5780-4a2a-ac16-7d54faa6fa97] Running
	I0408 22:47:05.774595   16992 system_pods.go:74] duration metric: took 178.388309ms to wait for pod list to return data ...
	I0408 22:47:05.774604   16992 default_sa.go:34] waiting for default service account to be created ...
	I0408 22:47:05.824862   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 22:47:05.973733   16992 default_sa.go:45] found service account: "default"
	I0408 22:47:05.973759   16992 default_sa.go:55] duration metric: took 199.149425ms for default service account to be created ...
	I0408 22:47:05.973767   16992 system_pods.go:116] waiting for k8s-apps to be running ...
	I0408 22:47:06.136053   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 22:47:06.136759   16992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 22:47:06.174340   16992 system_pods.go:86] 18 kube-system pods found
	I0408 22:47:06.174366   16992 system_pods.go:89] "amd-gpu-device-plugin-bfpvp" [24c4ff4e-c69c-4ca1-937e-01b646af664a] Running
	I0408 22:47:06.174372   16992 system_pods.go:89] "coredns-668d6bf9bc-tmwrn" [4a36a98a-dcbf-4fdc-b8cd-01bf6e34d08f] Running
	I0408 22:47:06.174378   16992 system_pods.go:89] "csi-hostpath-attacher-0" [cdc61e13-f1a1-452b-a6bc-bf8209d5dad5] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0408 22:47:06.174383   16992 system_pods.go:89] "csi-hostpath-resizer-0" [d77f9f5f-452a-4be9-94f3-e2345d7b4f24] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0408 22:47:06.174390   16992 system_pods.go:89] "csi-hostpathplugin-j6wgl" [2d612896-e22e-4c45-a4e1-604fb6e6b85b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0408 22:47:06.174395   16992 system_pods.go:89] "etcd-addons-355098" [f0f3b539-bca5-40af-87b2-599c6de5cc93] Running
	I0408 22:47:06.174399   16992 system_pods.go:89] "kube-apiserver-addons-355098" [bb1e01b1-3f83-464e-a15a-f849707657ee] Running
	I0408 22:47:06.174402   16992 system_pods.go:89] "kube-controller-manager-addons-355098" [3fa775f1-ce3a-48e6-94bf-a60d303d78a6] Running
	I0408 22:47:06.174407   16992 system_pods.go:89] "kube-ingress-dns-minikube" [25c27a56-63e6-47c2-b868-3e39febb0bd3] Running
	I0408 22:47:06.174410   16992 system_pods.go:89] "kube-proxy-t88l4" [6f63d680-43c5-4442-b7e3-d67d63e08c91] Running
	I0408 22:47:06.174413   16992 system_pods.go:89] "kube-scheduler-addons-355098" [94ad73ea-95aa-4e5d-8580-d27a8a37356c] Running
	I0408 22:47:06.174416   16992 system_pods.go:89] "metrics-server-7fbb699795-4f8hj" [73c9a758-36b5-417d-acd3-24e45007b5ae] Running
	I0408 22:47:06.174419   16992 system_pods.go:89] "nvidia-device-plugin-daemonset-rxkbz" [7c7ef4d7-3bef-4311-b3c2-6811114c281e] Running
	I0408 22:47:06.174423   16992 system_pods.go:89] "registry-6c88467877-78d7v" [7430b453-dc57-4eca-89e7-132b388f3fb8] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0408 22:47:06.174427   16992 system_pods.go:89] "registry-proxy-g8xx6" [baa26201-896c-4f4d-ac83-9ee192e0cb9f] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0408 22:47:06.174432   16992 system_pods.go:89] "snapshot-controller-68b874b76f-lsd4m" [c2de5602-2ab4-40b9-9b5a-ed46145c609a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0408 22:47:06.174441   16992 system_pods.go:89] "snapshot-controller-68b874b76f-mhk89" [6b516a7e-d94a-431b-b7c3-b558c409e891] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0408 22:47:06.174445   16992 system_pods.go:89] "storage-provisioner" [b9963b95-5780-4a2a-ac16-7d54faa6fa97] Running
	I0408 22:47:06.174455   16992 system_pods.go:126] duration metric: took 200.683266ms to wait for k8s-apps to be running ...
	I0408 22:47:06.174461   16992 system_svc.go:44] waiting for kubelet service to be running ....
	I0408 22:47:06.174512   16992 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0408 22:47:06.208730   16992 system_svc.go:56] duration metric: took 34.260449ms WaitForService to wait for kubelet
	I0408 22:47:06.208754   16992 kubeadm.go:582] duration metric: took 36.465043807s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0408 22:47:06.208770   16992 node_conditions.go:102] verifying NodePressure condition ...
	I0408 22:47:06.244861   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 22:47:06.325349   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 22:47:06.372836   16992 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0408 22:47:06.372869   16992 node_conditions.go:123] node cpu capacity is 2
	I0408 22:47:06.372887   16992 node_conditions.go:105] duration metric: took 164.111727ms to run NodePressure ...
	I0408 22:47:06.372901   16992 start.go:241] waiting for startup goroutines ...
	I0408 22:47:06.636979   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 22:47:06.637014   16992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 22:47:06.745433   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 22:47:06.825093   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 22:47:07.136382   16992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 22:47:07.141916   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 22:47:07.263953   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 22:47:07.325874   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 22:47:07.636975   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 22:47:07.637028   16992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 22:47:07.745096   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 22:47:07.825492   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 22:47:08.135862   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 22:47:08.136063   16992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 22:47:08.245089   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 22:47:08.336309   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 22:47:08.882375   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 22:47:08.882525   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 22:47:08.882533   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 22:47:08.882738   16992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 22:47:09.136270   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 22:47:09.136437   16992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 22:47:09.245109   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 22:47:09.325815   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 22:47:09.639240   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 22:47:09.639503   16992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 22:47:09.745279   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 22:47:09.825376   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 22:47:10.137291   16992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 22:47:10.137433   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 22:47:10.246059   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 22:47:10.324962   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 22:47:10.636553   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 22:47:10.636600   16992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 22:47:10.746358   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 22:47:10.825275   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 22:47:11.136340   16992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 22:47:11.136475   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 22:47:11.246389   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 22:47:11.326179   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 22:47:11.637499   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 22:47:11.637507   16992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 22:47:11.745555   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 22:47:11.825203   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 22:47:12.137619   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 22:47:12.137773   16992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 22:47:12.245295   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 22:47:12.324801   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 22:47:12.636131   16992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 22:47:12.636275   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 22:47:12.745893   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 22:47:12.825705   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 22:47:13.136538   16992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 22:47:13.136613   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 22:47:13.244806   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 22:47:13.325682   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 22:47:13.636373   16992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 22:47:13.636855   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 22:47:13.745699   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 22:47:14.050456   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 22:47:14.138714   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 22:47:14.138918   16992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 22:47:14.244844   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 22:47:14.325519   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 22:47:14.636439   16992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 22:47:14.636454   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 22:47:14.745358   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 22:47:14.825280   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 22:47:15.137041   16992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 22:47:15.137068   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 22:47:15.245096   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 22:47:15.325699   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 22:47:15.636072   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 22:47:15.636216   16992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 22:47:15.745501   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 22:47:15.834588   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 22:47:16.136854   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 22:47:16.137929   16992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 22:47:16.245266   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 22:47:16.325925   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 22:47:16.636198   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 22:47:16.636291   16992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 22:47:16.745211   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 22:47:16.825687   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 22:47:17.137101   16992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 22:47:17.137243   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 22:47:17.244974   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 22:47:17.325652   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 22:47:17.637309   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 22:47:17.637327   16992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 22:47:17.745604   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 22:47:17.825513   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 22:47:18.135993   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 22:47:18.136174   16992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 22:47:18.245422   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 22:47:18.325691   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 22:47:18.636361   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 22:47:18.636471   16992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 22:47:18.745689   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 22:47:18.825092   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 22:47:19.339624   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 22:47:19.349771   16992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 22:47:19.350015   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 22:47:19.350184   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 22:47:19.636710   16992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 22:47:19.636729   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 22:47:19.744784   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 22:47:19.824573   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 22:47:20.137356   16992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 22:47:20.137369   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 22:47:20.245310   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 22:47:20.324615   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 22:47:20.639245   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 22:47:20.640830   16992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 22:47:20.746479   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 22:47:20.824977   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 22:47:21.136470   16992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 22:47:21.136690   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 22:47:21.244476   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 22:47:21.325262   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 22:47:21.636295   16992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 22:47:21.636385   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 22:47:21.745282   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 22:47:21.976634   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 22:47:22.136775   16992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 22:47:22.136840   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 22:47:22.245178   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 22:47:22.326170   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 22:47:22.641027   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 22:47:22.641181   16992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 22:47:22.744925   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 22:47:22.826071   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 22:47:23.136393   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 22:47:23.136537   16992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 22:47:23.245037   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 22:47:23.324893   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 22:47:23.636336   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 22:47:23.638858   16992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 22:47:23.745037   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 22:47:23.825839   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 22:47:24.136982   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 22:47:24.137016   16992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 22:47:24.244866   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 22:47:24.325481   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 22:47:24.636074   16992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 22:47:24.636175   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 22:47:24.745099   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 22:47:24.826593   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 22:47:25.137012   16992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 22:47:25.137099   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 22:47:25.244788   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 22:47:25.325111   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 22:47:25.640014   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 22:47:25.640389   16992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 22:47:25.746491   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 22:47:25.825223   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 22:47:26.137163   16992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 22:47:26.137963   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 22:47:26.245470   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 22:47:26.325034   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 22:47:26.636829   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 22:47:26.636856   16992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 22:47:26.745045   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 22:47:26.825985   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 22:47:27.137113   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 22:47:27.137160   16992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 22:47:27.245234   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 22:47:27.324862   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 22:47:27.636556   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 22:47:27.636771   16992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 22:47:27.744777   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 22:47:27.825281   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 22:47:28.136580   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 22:47:28.136754   16992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 22:47:28.244826   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 22:47:28.324741   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 22:47:28.636796   16992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 22:47:28.636914   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 22:47:28.744774   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 22:47:28.825921   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 22:47:29.136436   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 22:47:29.136621   16992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 22:47:29.245594   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 22:47:29.325086   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 22:47:29.638518   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 22:47:29.638742   16992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 22:47:29.744368   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 22:47:29.825599   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 22:47:30.137451   16992 kapi.go:107] duration metric: took 52.504869943s to wait for kubernetes.io/minikube-addons=registry ...
	I0408 22:47:30.137585   16992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 22:47:30.245569   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 22:47:30.325178   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 22:47:30.636153   16992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 22:47:30.745244   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 22:47:30.826787   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 22:47:31.471331   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 22:47:31.471360   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 22:47:31.471392   16992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 22:47:31.638624   16992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 22:47:31.746049   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 22:47:31.845698   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 22:47:32.141617   16992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 22:47:32.245553   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 22:47:32.325435   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 22:47:32.636604   16992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 22:47:32.745817   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 22:47:32.825548   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 22:47:33.136493   16992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 22:47:33.244434   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 22:47:33.325898   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 22:47:33.635717   16992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 22:47:33.744972   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 22:47:33.845032   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 22:47:34.136222   16992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 22:47:34.248029   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 22:47:34.325883   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 22:47:34.635698   16992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 22:47:34.745079   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 22:47:34.825268   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 22:47:35.136237   16992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 22:47:35.245657   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 22:47:35.324821   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 22:47:35.636267   16992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 22:47:35.745497   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 22:47:35.825270   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 22:47:36.140885   16992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 22:47:36.244843   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 22:47:36.325215   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 22:47:36.636919   16992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 22:47:37.159432   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 22:47:37.182970   16992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 22:47:37.183435   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 22:47:37.259353   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 22:47:37.359154   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 22:47:37.636695   16992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 22:47:37.745081   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 22:47:37.826155   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 22:47:38.136331   16992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 22:47:38.254491   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 22:47:38.325337   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 22:47:38.636510   16992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 22:47:38.745990   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 22:47:38.826401   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 22:47:39.136951   16992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 22:47:39.245783   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 22:47:39.346235   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 22:47:39.637482   16992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 22:47:39.745503   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 22:47:39.825602   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 22:47:40.136657   16992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 22:47:40.245490   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 22:47:40.330200   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 22:47:40.636620   16992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 22:47:40.745520   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 22:47:40.825517   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 22:47:41.136559   16992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 22:47:41.245713   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 22:47:41.325575   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 22:47:41.636592   16992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 22:47:41.746684   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 22:47:41.825648   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 22:47:42.137811   16992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 22:47:42.246285   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 22:47:42.326182   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 22:47:42.639086   16992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 22:47:42.745633   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 22:47:42.825672   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 22:47:43.138210   16992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 22:47:43.246057   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 22:47:43.325554   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 22:47:43.640080   16992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 22:47:43.746096   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 22:47:44.195292   16992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 22:47:44.195362   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 22:47:44.246966   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 22:47:44.326103   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 22:47:44.635827   16992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 22:47:44.744973   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 22:47:44.825538   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 22:47:45.136612   16992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 22:47:45.246407   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 22:47:45.324989   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 22:47:45.636670   16992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 22:47:45.745671   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 22:47:45.824856   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 22:47:46.136263   16992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 22:47:46.245442   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 22:47:46.325237   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 22:47:46.636625   16992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 22:47:46.745887   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 22:47:46.826060   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 22:47:47.136636   16992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 22:47:47.252499   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 22:47:47.709877   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 22:47:47.710101   16992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 22:47:47.746574   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 22:47:47.825287   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 22:47:48.136470   16992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 22:47:48.245976   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 22:47:48.325011   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 22:47:48.636535   16992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 22:47:48.745540   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 22:47:48.825300   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 22:47:49.136564   16992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 22:47:49.245944   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 22:47:49.325284   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 22:47:49.637209   16992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 22:47:50.088111   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 22:47:50.088779   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 22:47:50.136839   16992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 22:47:50.245075   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 22:47:50.346449   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 22:47:50.636027   16992 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 22:47:50.745154   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 22:47:50.825966   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 22:47:51.136470   16992 kapi.go:107] duration metric: took 1m13.503695085s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0408 22:47:51.245192   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 22:47:51.326351   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 22:47:51.748258   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 22:47:51.826790   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 22:47:52.245455   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 22:47:52.324840   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 22:47:52.745615   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 22:47:52.825288   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 22:47:53.244985   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 22:47:53.325333   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 22:47:53.745865   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 22:47:53.832176   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 22:47:54.245715   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 22:47:54.325327   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 22:47:54.745031   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 22:47:54.846526   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 22:47:55.245894   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 22:47:55.346582   16992 kapi.go:107] duration metric: took 1m15.024498674s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0408 22:47:55.348000   16992 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-355098 cluster.
	I0408 22:47:55.349006   16992 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0408 22:47:55.350107   16992 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0408 22:47:55.745119   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 22:47:56.245583   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 22:47:56.746732   16992 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 22:47:57.245900   16992 kapi.go:107] duration metric: took 1m18.504256671s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0408 22:47:57.247505   16992 out.go:177] * Enabled addons: amd-gpu-device-plugin, metrics-server, nvidia-device-plugin, ingress-dns, cloud-spanner, inspektor-gadget, storage-provisioner, yakd, default-storageclass, storage-provisioner-rancher, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0408 22:47:57.248681   16992 addons.go:514] duration metric: took 1m27.504921798s for enable addons: enabled=[amd-gpu-device-plugin metrics-server nvidia-device-plugin ingress-dns cloud-spanner inspektor-gadget storage-provisioner yakd default-storageclass storage-provisioner-rancher volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0408 22:47:57.248726   16992 start.go:246] waiting for cluster config update ...
	I0408 22:47:57.248750   16992 start.go:255] writing updated cluster config ...
	I0408 22:47:57.249033   16992 ssh_runner.go:195] Run: rm -f paused
	I0408 22:47:57.300714   16992 start.go:600] kubectl: 1.32.3, cluster: 1.32.2 (minor skew: 0)
	I0408 22:47:57.302421   16992 out.go:177] * Done! kubectl is now configured to use "addons-355098" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Apr 08 22:51:01 addons-355098 crio[657]: time="2025-04-08 22:51:01.372232932Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744152661372208573,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595808,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=708cfbe6-c138-4534-9b2c-d6082c27ca14 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 08 22:51:01 addons-355098 crio[657]: time="2025-04-08 22:51:01.373160472Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=70cd4930-a9ef-494f-afa4-38c45d1326c1 name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 22:51:01 addons-355098 crio[657]: time="2025-04-08 22:51:01.373485832Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=70cd4930-a9ef-494f-afa4-38c45d1326c1 name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 22:51:01 addons-355098 crio[657]: time="2025-04-08 22:51:01.374081426Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1c651a539e21430b7764654d8975d7ab73797e909addb8629767da88aad00dda,PodSandboxId:18257259ac056a65afd2b93efd36ba7c4baa674acebe6e3f63ec60c260bcc89f,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:4ff102c5d78d254a6f0da062b3cf39eaf07f01eec0927fd21e219d0af8bc0591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ff4bb4faebcfb1f7e01144fa9904a570ab9bab88694457855feb6c6bba3fa07,State:CONTAINER_RUNNING,CreatedAt:1744152521071139981,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1016ffa1-7455-4039-9721-439abd6919c0,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6855963a12bd7f9318f63ef0d913582e7ae027c450f182b29acd35b84008774c,PodSandboxId:200ce6c0528523f000f3f7196b09bbd131409ea495e0c7f9cee93a8be81cdff1,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1744152481916451836,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2668e738-92e7-49ab-a3d2-53c61273973a,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3315b29f88d5ce9574c944b9b87d69cd232ba0285315fbd1da60262c2c252a0,PodSandboxId:118519149c99aca9869342e7cfbf3e47f7e8b26d56e3138117a31dd1aa94296a,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1744152470216483071,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-56d7c84fd4-hwnwt,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: b3b3c0a9-cc76-4135-ae1d-e50226631b42,},Annotations:map[string]string{io.kubernetes.
container.hash: 4e8eee94,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:5713f46d6ff35d56b9313f3d8970204ee78112b268decaddcc0114ebaaad76aa,PodSandboxId:c148492a3a87cc56f5c4ebac10a5bb49d1888eca0360a4f72420c99d4335ad4a,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,St
ate:CONTAINER_EXITED,CreatedAt:1744152454095178159,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-zxllf,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 51462bf5-2603-4e5c-9906-e8f37a42c965,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b352991442288dc1d356ca1dc7989fddeb2e53a89bbe03093193d00a88be354d,PodSandboxId:0a1927badfae075c5177c19c37f05d00050198eb47484636357e3b8caa2827a1,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f61806552
90afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1744152453690119200,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-zcxjn,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: dd061eae-8064-44be-b6e3-f089f7d217b5,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b072bfc9cfe231f55439c4f7edde6f2135c37e0a6b9247abe7133d905be482e6,PodSandboxId:f3dbfa3e3503a7efbc47aa5edde5fb01691707244f861687eb4da4d5dedb33a6,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Image
Ref:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1744152421554966254,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-bfpvp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24c4ff4e-c69c-4ca1-937e-01b646af664a,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59fd45c2e74e54699f4a73f7fe30d3c6486915920ac7fe45ad497db579194ac6,PodSandboxId:d8ddc5d089e0e004ce1d99fde59ca8b4b81eb0e121b820ce2dc10506275d4fe6,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]strin
g{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1744152418839942592,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25c27a56-63e6-47c2-b868-3e39febb0bd3,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75583b1f65ee42dd306335ff818c7e8b730c47e18fc71b0c255044af4dbc753b,PodSandboxId:157550ff8ef8bb1b8e454c5847744c17747b11c606693f330cd9c49736dee220,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f
40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1744152395851653471,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9963b95-5780-4a2a-ac16-7d54faa6fa97,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc7527da264f870dc99f89f2b62f6f49685ff472fa4948ddd5c7f954e10f6cd2,PodSandboxId:23493d8925536b80d39c600c8c0ea7064d5647e527edd7d0629dd0d70c463e33,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d
3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1744152393419925499,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-tmwrn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a36a98a-dcbf-4fdc-b8cd-01bf6e34d08f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ee69dc5821040ff469e302848d70283ddb55ca2cb3f27b8b4609
a80685a41b3,PodSandboxId:beecf985e7fcfc6acac39b1b536df5418f082f0a372ade5fec77346622ea0ef0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_RUNNING,CreatedAt:1744152390575732772,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-t88l4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f63d680-43c5-4442-b7e3-d67d63e08c91,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3452bbd43620760cce4e2a1495a32f419ab3b149266d3ce160aa6ab87cec19e5,PodSandboxId:46e9bca86
63860e0730fb17765fc08b585bc1a50c328bae05d4170d464169a4f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1744152379693784144,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-355098,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 244d0583006a608c8b83292f5e9eed4c,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4657ddcd787010d32fe6d335e26451125e6cc1d8191c21df7cf194c1e51af37b,PodSandboxId:cd2925f8296e350630981930ee8cc7a97a42876b9e5a2cea8e49522f
dea1dc61,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_RUNNING,CreatedAt:1744152379714535168,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-355098,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44e770560cbfed119ad4585fbf99e128,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36c0277b71e19f1e0507194f36a9d5676aa868d8863931aad976a9448f99a024,PodSandboxId:a31600cb6affaca87cfdf95cf4612f87bde25db4a33b4d312d311206661b1610,Metadata
:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_RUNNING,CreatedAt:1744152379616856651,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-355098,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef84559a469b6f692f8da6aeb1d77aa9,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:683e8d222b068dcd7730b274a97c5045d8badab841f56aa7fbcac15ab6a0f498,PodSandboxId:9e899153b1e53a1fd73925e587e74771de89ae281db05b4c25f135b4cad375c
6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_RUNNING,CreatedAt:1744152379634276736,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-355098,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a936cf37e8c8d267320a8841e6b3785b,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=70cd4930-a9ef-494f-afa4-38c45d1326c1 name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 22:51:01 addons-355098 crio[657]: time="2025-04-08 22:51:01.383661352Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=5810c107-fcf2-471f-98da-c5d1c5e7446a name=/runtime.v1.RuntimeService/ListPodSandbox
	Apr 08 22:51:01 addons-355098 crio[657]: time="2025-04-08 22:51:01.383966469Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:d97322667d3522598763a65c8ea8b678770db4d650d6acf30d746df61b89966a,Metadata:&PodSandboxMetadata{Name:hello-world-app-7d9564db4-qgpqr,Uid:4c26bad6-e263-4ce3-a373-a88ac1cae58c,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1744152660471617786,Labels:map[string]string{app: hello-world-app,io.kubernetes.container.name: POD,io.kubernetes.pod.name: hello-world-app-7d9564db4-qgpqr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4c26bad6-e263-4ce3-a373-a88ac1cae58c,pod-template-hash: 7d9564db4,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-04-08T22:51:00.159471548Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:18257259ac056a65afd2b93efd36ba7c4baa674acebe6e3f63ec60c260bcc89f,Metadata:&PodSandboxMetadata{Name:nginx,Uid:1016ffa1-7455-4039-9721-439abd6919c0,Namespace:default,Attempt:0,},St
ate:SANDBOX_READY,CreatedAt:1744152517020804608,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1016ffa1-7455-4039-9721-439abd6919c0,run: nginx,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-04-08T22:48:36.712430288Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:200ce6c0528523f000f3f7196b09bbd131409ea495e0c7f9cee93a8be81cdff1,Metadata:&PodSandboxMetadata{Name:busybox,Uid:2668e738-92e7-49ab-a3d2-53c61273973a,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1744152478178577399,Labels:map[string]string{integration-test: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2668e738-92e7-49ab-a3d2-53c61273973a,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-04-08T22:47:57.868267575Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:118519149c99aca986934
2e7cfbf3e47f7e8b26d56e3138117a31dd1aa94296a,Metadata:&PodSandboxMetadata{Name:ingress-nginx-controller-56d7c84fd4-hwnwt,Uid:b3b3c0a9-cc76-4135-ae1d-e50226631b42,Namespace:ingress-nginx,Attempt:0,},State:SANDBOX_READY,CreatedAt:1744152461828433963,Labels:map[string]string{app.kubernetes.io/component: controller,app.kubernetes.io/instance: ingress-nginx,app.kubernetes.io/name: ingress-nginx,gcp-auth-skip-secret: true,io.kubernetes.container.name: POD,io.kubernetes.pod.name: ingress-nginx-controller-56d7c84fd4-hwnwt,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: b3b3c0a9-cc76-4135-ae1d-e50226631b42,pod-template-hash: 56d7c84fd4,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-04-08T22:46:37.499202306Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:0a1927badfae075c5177c19c37f05d00050198eb47484636357e3b8caa2827a1,Metadata:&PodSandboxMetadata{Name:ingress-nginx-admission-create-zcxjn,Uid:dd061eae-8064-44be-b6e3-f089f7d217b5,Namespace:ingress-nginx,Attempt:0,},S
tate:SANDBOX_NOTREADY,CreatedAt:1744152398149621486,Labels:map[string]string{app.kubernetes.io/component: admission-webhook,app.kubernetes.io/instance: ingress-nginx,app.kubernetes.io/name: ingress-nginx,batch.kubernetes.io/controller-uid: ce0c246e-8d86-4563-8b0b-3bd993fdc6e5,batch.kubernetes.io/job-name: ingress-nginx-admission-create,controller-uid: ce0c246e-8d86-4563-8b0b-3bd993fdc6e5,io.kubernetes.container.name: POD,io.kubernetes.pod.name: ingress-nginx-admission-create-zcxjn,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: dd061eae-8064-44be-b6e3-f089f7d217b5,job-name: ingress-nginx-admission-create,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-04-08T22:46:37.528991176Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c148492a3a87cc56f5c4ebac10a5bb49d1888eca0360a4f72420c99d4335ad4a,Metadata:&PodSandboxMetadata{Name:ingress-nginx-admission-patch-zxllf,Uid:51462bf5-2603-4e5c-9906-e8f37a42c965,Namespace:ingress-nginx,Attempt:0,},State:SANDBOX_NOTREADY,C
reatedAt:1744152397892549880,Labels:map[string]string{app.kubernetes.io/component: admission-webhook,app.kubernetes.io/instance: ingress-nginx,app.kubernetes.io/name: ingress-nginx,batch.kubernetes.io/controller-uid: d6076009-2954-408f-a147-07dcab7cfe9a,batch.kubernetes.io/job-name: ingress-nginx-admission-patch,controller-uid: d6076009-2954-408f-a147-07dcab7cfe9a,io.kubernetes.container.name: POD,io.kubernetes.pod.name: ingress-nginx-admission-patch-zxllf,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 51462bf5-2603-4e5c-9906-e8f37a42c965,job-name: ingress-nginx-admission-patch,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-04-08T22:46:37.570924686Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:157550ff8ef8bb1b8e454c5847744c17747b11c606693f330cd9c49736dee220,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:b9963b95-5780-4a2a-ac16-7d54faa6fa97,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1744152395008041169,Labels:map[string]
string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9963b95-5780-4a2a-ac16-7d54faa6fa97,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io
/config.seen: 2025-04-08T22:46:34.396766172Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:d8ddc5d089e0e004ce1d99fde59ca8b4b81eb0e121b820ce2dc10506275d4fe6,Metadata:&PodSandboxMetadata{Name:kube-ingress-dns-minikube,Uid:25c27a56-63e6-47c2-b868-3e39febb0bd3,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1744152394472558708,Labels:map[string]string{app: minikube-ingress-dns,app.kubernetes.io/part-of: kube-system,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25c27a56-63e6-47c2-b868-3e39febb0bd3,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"minikube-ingress-dns\",\"app.kubernetes.io/part-of\":\"kube-system\"},\"name\":\"kube-ingress-dns-minikube\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"DNS_PORT\",\"value\":
\"53\"},{\"name\":\"POD_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.podIP\"}}}],\"image\":\"gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3@sha256:4211a1de532376c881851542238121b26792225faa36a7b02dccad88fd05797c\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"minikube-ingress-dns\",\"ports\":[{\"containerPort\":53,\"protocol\":\"UDP\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"minikube-ingress-dns\"}}\n,kubernetes.io/config.seen: 2025-04-08T22:46:33.866491325Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f3dbfa3e3503a7efbc47aa5edde5fb01691707244f861687eb4da4d5dedb33a6,Metadata:&PodSandboxMetadata{Name:amd-gpu-device-plugin-bfpvp,Uid:24c4ff4e-c69c-4ca1-937e-01b646af664a,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1744152392891751606,Labels:map[string]string{controller-revision-hash: 578b4c597,io.kubernetes.container.name: POD,io.kubernetes.pod.name: amd-gpu-device-plugin-bfpvp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24c4ff4
e-c69c-4ca1-937e-01b646af664a,k8s-app: amd-gpu-device-plugin,name: amd-gpu-device-plugin,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-04-08T22:46:32.237110648Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:23493d8925536b80d39c600c8c0ea7064d5647e527edd7d0629dd0d70c463e33,Metadata:&PodSandboxMetadata{Name:coredns-668d6bf9bc-tmwrn,Uid:4a36a98a-dcbf-4fdc-b8cd-01bf6e34d08f,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1744152390262636796,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-668d6bf9bc-tmwrn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a36a98a-dcbf-4fdc-b8cd-01bf6e34d08f,k8s-app: kube-dns,pod-template-hash: 668d6bf9bc,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-04-08T22:46:29.953726406Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:beecf985e7fcfc6acac39b1b536df5418f082f0a372ade5fec77346622ea0ef0,Metadata:&PodSandboxM
etadata{Name:kube-proxy-t88l4,Uid:6f63d680-43c5-4442-b7e3-d67d63e08c91,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1744152389921733421,Labels:map[string]string{controller-revision-hash: 7bb84c4984,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-t88l4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f63d680-43c5-4442-b7e3-d67d63e08c91,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-04-08T22:46:29.583359672Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:46e9bca8663860e0730fb17765fc08b585bc1a50c328bae05d4170d464169a4f,Metadata:&PodSandboxMetadata{Name:etcd-addons-355098,Uid:244d0583006a608c8b83292f5e9eed4c,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1744152379500893521,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-addons-355098,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 244d0
583006a608c8b83292f5e9eed4c,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.199:2379,kubernetes.io/config.hash: 244d0583006a608c8b83292f5e9eed4c,kubernetes.io/config.seen: 2025-04-08T22:46:19.042481789Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:9e899153b1e53a1fd73925e587e74771de89ae281db05b4c25f135b4cad375c6,Metadata:&PodSandboxMetadata{Name:kube-apiserver-addons-355098,Uid:a936cf37e8c8d267320a8841e6b3785b,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1744152379499631433,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-addons-355098,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a936cf37e8c8d267320a8841e6b3785b,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.199:8443,kubernetes.io/config.hash: a936cf37e8c8d267320a8841e6b3785
b,kubernetes.io/config.seen: 2025-04-08T22:46:19.042484991Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:cd2925f8296e350630981930ee8cc7a97a42876b9e5a2cea8e49522fdea1dc61,Metadata:&PodSandboxMetadata{Name:kube-scheduler-addons-355098,Uid:44e770560cbfed119ad4585fbf99e128,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1744152379497352392,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-addons-355098,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44e770560cbfed119ad4585fbf99e128,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 44e770560cbfed119ad4585fbf99e128,kubernetes.io/config.seen: 2025-04-08T22:46:19.042487139Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:a31600cb6affaca87cfdf95cf4612f87bde25db4a33b4d312d311206661b1610,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-addons-355098,Uid:ef84559a469b6f692f8da6aeb1d77aa
9,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1744152379488094280,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-addons-355098,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef84559a469b6f692f8da6aeb1d77aa9,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: ef84559a469b6f692f8da6aeb1d77aa9,kubernetes.io/config.seen: 2025-04-08T22:46:19.042486165Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=5810c107-fcf2-471f-98da-c5d1c5e7446a name=/runtime.v1.RuntimeService/ListPodSandbox
	Apr 08 22:51:01 addons-355098 crio[657]: time="2025-04-08 22:51:01.384800981Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=03e20b60-c011-440b-8991-231a5dc8de6a name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 22:51:01 addons-355098 crio[657]: time="2025-04-08 22:51:01.384861129Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=03e20b60-c011-440b-8991-231a5dc8de6a name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 22:51:01 addons-355098 crio[657]: time="2025-04-08 22:51:01.385121051Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1c651a539e21430b7764654d8975d7ab73797e909addb8629767da88aad00dda,PodSandboxId:18257259ac056a65afd2b93efd36ba7c4baa674acebe6e3f63ec60c260bcc89f,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:4ff102c5d78d254a6f0da062b3cf39eaf07f01eec0927fd21e219d0af8bc0591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ff4bb4faebcfb1f7e01144fa9904a570ab9bab88694457855feb6c6bba3fa07,State:CONTAINER_RUNNING,CreatedAt:1744152521071139981,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1016ffa1-7455-4039-9721-439abd6919c0,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6855963a12bd7f9318f63ef0d913582e7ae027c450f182b29acd35b84008774c,PodSandboxId:200ce6c0528523f000f3f7196b09bbd131409ea495e0c7f9cee93a8be81cdff1,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1744152481916451836,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2668e738-92e7-49ab-a3d2-53c61273973a,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3315b29f88d5ce9574c944b9b87d69cd232ba0285315fbd1da60262c2c252a0,PodSandboxId:118519149c99aca9869342e7cfbf3e47f7e8b26d56e3138117a31dd1aa94296a,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1744152470216483071,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-56d7c84fd4-hwnwt,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: b3b3c0a9-cc76-4135-ae1d-e50226631b42,},Annotations:map[string]string{io.kubernetes.
container.hash: 4e8eee94,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:5713f46d6ff35d56b9313f3d8970204ee78112b268decaddcc0114ebaaad76aa,PodSandboxId:c148492a3a87cc56f5c4ebac10a5bb49d1888eca0360a4f72420c99d4335ad4a,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,St
ate:CONTAINER_EXITED,CreatedAt:1744152454095178159,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-zxllf,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 51462bf5-2603-4e5c-9906-e8f37a42c965,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b352991442288dc1d356ca1dc7989fddeb2e53a89bbe03093193d00a88be354d,PodSandboxId:0a1927badfae075c5177c19c37f05d00050198eb47484636357e3b8caa2827a1,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f61806552
90afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1744152453690119200,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-zcxjn,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: dd061eae-8064-44be-b6e3-f089f7d217b5,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b072bfc9cfe231f55439c4f7edde6f2135c37e0a6b9247abe7133d905be482e6,PodSandboxId:f3dbfa3e3503a7efbc47aa5edde5fb01691707244f861687eb4da4d5dedb33a6,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Image
Ref:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1744152421554966254,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-bfpvp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24c4ff4e-c69c-4ca1-937e-01b646af664a,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59fd45c2e74e54699f4a73f7fe30d3c6486915920ac7fe45ad497db579194ac6,PodSandboxId:d8ddc5d089e0e004ce1d99fde59ca8b4b81eb0e121b820ce2dc10506275d4fe6,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]strin
g{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1744152418839942592,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25c27a56-63e6-47c2-b868-3e39febb0bd3,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75583b1f65ee42dd306335ff818c7e8b730c47e18fc71b0c255044af4dbc753b,PodSandboxId:157550ff8ef8bb1b8e454c5847744c17747b11c606693f330cd9c49736dee220,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f
40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1744152395851653471,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9963b95-5780-4a2a-ac16-7d54faa6fa97,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc7527da264f870dc99f89f2b62f6f49685ff472fa4948ddd5c7f954e10f6cd2,PodSandboxId:23493d8925536b80d39c600c8c0ea7064d5647e527edd7d0629dd0d70c463e33,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d
3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1744152393419925499,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-tmwrn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a36a98a-dcbf-4fdc-b8cd-01bf6e34d08f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ee69dc5821040ff469e302848d70283ddb55ca2cb3f27b8b4609
a80685a41b3,PodSandboxId:beecf985e7fcfc6acac39b1b536df5418f082f0a372ade5fec77346622ea0ef0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_RUNNING,CreatedAt:1744152390575732772,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-t88l4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f63d680-43c5-4442-b7e3-d67d63e08c91,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3452bbd43620760cce4e2a1495a32f419ab3b149266d3ce160aa6ab87cec19e5,PodSandboxId:46e9bca86
63860e0730fb17765fc08b585bc1a50c328bae05d4170d464169a4f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1744152379693784144,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-355098,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 244d0583006a608c8b83292f5e9eed4c,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4657ddcd787010d32fe6d335e26451125e6cc1d8191c21df7cf194c1e51af37b,PodSandboxId:cd2925f8296e350630981930ee8cc7a97a42876b9e5a2cea8e49522f
dea1dc61,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_RUNNING,CreatedAt:1744152379714535168,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-355098,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44e770560cbfed119ad4585fbf99e128,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36c0277b71e19f1e0507194f36a9d5676aa868d8863931aad976a9448f99a024,PodSandboxId:a31600cb6affaca87cfdf95cf4612f87bde25db4a33b4d312d311206661b1610,Metadata
:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_RUNNING,CreatedAt:1744152379616856651,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-355098,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef84559a469b6f692f8da6aeb1d77aa9,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:683e8d222b068dcd7730b274a97c5045d8badab841f56aa7fbcac15ab6a0f498,PodSandboxId:9e899153b1e53a1fd73925e587e74771de89ae281db05b4c25f135b4cad375c
6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_RUNNING,CreatedAt:1744152379634276736,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-355098,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a936cf37e8c8d267320a8841e6b3785b,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=03e20b60-c011-440b-8991-231a5dc8de6a name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 22:51:01 addons-355098 crio[657]: time="2025-04-08 22:51:01.386094464Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:nil,LabelSelector:map[string]string{io.kubernetes.pod.uid: 4c26bad6-e263-4ce3-a373-a88ac1cae58c,},},}" file="otel-collector/interceptors.go:62" id=fea42166-2047-481b-834a-1fd7d639c23d name=/runtime.v1.RuntimeService/ListPodSandbox
	Apr 08 22:51:01 addons-355098 crio[657]: time="2025-04-08 22:51:01.386341223Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:d97322667d3522598763a65c8ea8b678770db4d650d6acf30d746df61b89966a,Metadata:&PodSandboxMetadata{Name:hello-world-app-7d9564db4-qgpqr,Uid:4c26bad6-e263-4ce3-a373-a88ac1cae58c,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1744152660471617786,Labels:map[string]string{app: hello-world-app,io.kubernetes.container.name: POD,io.kubernetes.pod.name: hello-world-app-7d9564db4-qgpqr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4c26bad6-e263-4ce3-a373-a88ac1cae58c,pod-template-hash: 7d9564db4,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-04-08T22:51:00.159471548Z,kubernetes.io/config.source: api,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=fea42166-2047-481b-834a-1fd7d639c23d name=/runtime.v1.RuntimeService/ListPodSandbox
	Apr 08 22:51:01 addons-355098 crio[657]: time="2025-04-08 22:51:01.386932111Z" level=debug msg="Request: &PodSandboxStatusRequest{PodSandboxId:d97322667d3522598763a65c8ea8b678770db4d650d6acf30d746df61b89966a,Verbose:false,}" file="otel-collector/interceptors.go:62" id=8b399760-2ad8-473b-b281-c0966dc7743c name=/runtime.v1.RuntimeService/PodSandboxStatus
	Apr 08 22:51:01 addons-355098 crio[657]: time="2025-04-08 22:51:01.387024963Z" level=debug msg="Response: &PodSandboxStatusResponse{Status:&PodSandboxStatus{Id:d97322667d3522598763a65c8ea8b678770db4d650d6acf30d746df61b89966a,Metadata:&PodSandboxMetadata{Name:hello-world-app-7d9564db4-qgpqr,Uid:4c26bad6-e263-4ce3-a373-a88ac1cae58c,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1744152660471617786,Network:&PodSandboxNetworkStatus{Ip:10.244.0.33,AdditionalIps:[]*PodIP{},},Linux:&LinuxPodSandboxStatus{Namespaces:&Namespace{Options:&NamespaceOption{Network:POD,Pid:CONTAINER,Ipc:POD,TargetId:,UsernsOptions:nil,},},},Labels:map[string]string{app: hello-world-app,io.kubernetes.container.name: POD,io.kubernetes.pod.name: hello-world-app-7d9564db4-qgpqr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4c26bad6-e263-4ce3-a373-a88ac1cae58c,pod-template-hash: 7d9564db4,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-04-08T22:51:00.159471548Z,kubernetes.io/config.source: api
,},RuntimeHandler:,},Info:map[string]string{},ContainersStatuses:[]*ContainerStatus{},Timestamp:0,}" file="otel-collector/interceptors.go:74" id=8b399760-2ad8-473b-b281-c0966dc7743c name=/runtime.v1.RuntimeService/PodSandboxStatus
	Apr 08 22:51:01 addons-355098 crio[657]: time="2025-04-08 22:51:01.387412113Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{io.kubernetes.pod.uid: 4c26bad6-e263-4ce3-a373-a88ac1cae58c,},},}" file="otel-collector/interceptors.go:62" id=06a688ba-3861-42e3-85ad-2e0943bd6034 name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 22:51:01 addons-355098 crio[657]: time="2025-04-08 22:51:01.387525761Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=06a688ba-3861-42e3-85ad-2e0943bd6034 name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 22:51:01 addons-355098 crio[657]: time="2025-04-08 22:51:01.387564140Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=06a688ba-3861-42e3-85ad-2e0943bd6034 name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 22:51:01 addons-355098 crio[657]: time="2025-04-08 22:51:01.410620757Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=82c499d6-f708-42ea-b864-1f21636e3f01 name=/runtime.v1.RuntimeService/Version
	Apr 08 22:51:01 addons-355098 crio[657]: time="2025-04-08 22:51:01.410705296Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=82c499d6-f708-42ea-b864-1f21636e3f01 name=/runtime.v1.RuntimeService/Version
	Apr 08 22:51:01 addons-355098 crio[657]: time="2025-04-08 22:51:01.412037304Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=38bcd848-9086-48e0-b342-ef0674794025 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 08 22:51:01 addons-355098 crio[657]: time="2025-04-08 22:51:01.413396292Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744152661413368134,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595808,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=38bcd848-9086-48e0-b342-ef0674794025 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 08 22:51:01 addons-355098 crio[657]: time="2025-04-08 22:51:01.414192611Z" level=debug msg="Request: &VersionRequest{Version:0.1.0,}" file="otel-collector/interceptors.go:62" id=2adc56b9-7bf1-48e5-87c4-0f1bbb404555 name=/runtime.v1.RuntimeService/Version
	Apr 08 22:51:01 addons-355098 crio[657]: time="2025-04-08 22:51:01.414255665Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2adc56b9-7bf1-48e5-87c4-0f1bbb404555 name=/runtime.v1.RuntimeService/Version
	Apr 08 22:51:01 addons-355098 crio[657]: time="2025-04-08 22:51:01.414901726Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d0ad8267-3b31-4453-95f9-67dc608320be name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 22:51:01 addons-355098 crio[657]: time="2025-04-08 22:51:01.414967636Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d0ad8267-3b31-4453-95f9-67dc608320be name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 22:51:01 addons-355098 crio[657]: time="2025-04-08 22:51:01.415470952Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1c651a539e21430b7764654d8975d7ab73797e909addb8629767da88aad00dda,PodSandboxId:18257259ac056a65afd2b93efd36ba7c4baa674acebe6e3f63ec60c260bcc89f,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:4ff102c5d78d254a6f0da062b3cf39eaf07f01eec0927fd21e219d0af8bc0591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ff4bb4faebcfb1f7e01144fa9904a570ab9bab88694457855feb6c6bba3fa07,State:CONTAINER_RUNNING,CreatedAt:1744152521071139981,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1016ffa1-7455-4039-9721-439abd6919c0,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6855963a12bd7f9318f63ef0d913582e7ae027c450f182b29acd35b84008774c,PodSandboxId:200ce6c0528523f000f3f7196b09bbd131409ea495e0c7f9cee93a8be81cdff1,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1744152481916451836,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2668e738-92e7-49ab-a3d2-53c61273973a,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3315b29f88d5ce9574c944b9b87d69cd232ba0285315fbd1da60262c2c252a0,PodSandboxId:118519149c99aca9869342e7cfbf3e47f7e8b26d56e3138117a31dd1aa94296a,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1744152470216483071,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-56d7c84fd4-hwnwt,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: b3b3c0a9-cc76-4135-ae1d-e50226631b42,},Annotations:map[string]string{io.kubernetes.
container.hash: 4e8eee94,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:5713f46d6ff35d56b9313f3d8970204ee78112b268decaddcc0114ebaaad76aa,PodSandboxId:c148492a3a87cc56f5c4ebac10a5bb49d1888eca0360a4f72420c99d4335ad4a,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,St
ate:CONTAINER_EXITED,CreatedAt:1744152454095178159,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-zxllf,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 51462bf5-2603-4e5c-9906-e8f37a42c965,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b352991442288dc1d356ca1dc7989fddeb2e53a89bbe03093193d00a88be354d,PodSandboxId:0a1927badfae075c5177c19c37f05d00050198eb47484636357e3b8caa2827a1,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f61806552
90afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1744152453690119200,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-zcxjn,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: dd061eae-8064-44be-b6e3-f089f7d217b5,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b072bfc9cfe231f55439c4f7edde6f2135c37e0a6b9247abe7133d905be482e6,PodSandboxId:f3dbfa3e3503a7efbc47aa5edde5fb01691707244f861687eb4da4d5dedb33a6,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Image
Ref:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1744152421554966254,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-bfpvp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24c4ff4e-c69c-4ca1-937e-01b646af664a,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59fd45c2e74e54699f4a73f7fe30d3c6486915920ac7fe45ad497db579194ac6,PodSandboxId:d8ddc5d089e0e004ce1d99fde59ca8b4b81eb0e121b820ce2dc10506275d4fe6,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]strin
g{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1744152418839942592,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25c27a56-63e6-47c2-b868-3e39febb0bd3,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75583b1f65ee42dd306335ff818c7e8b730c47e18fc71b0c255044af4dbc753b,PodSandboxId:157550ff8ef8bb1b8e454c5847744c17747b11c606693f330cd9c49736dee220,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f
40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1744152395851653471,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9963b95-5780-4a2a-ac16-7d54faa6fa97,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc7527da264f870dc99f89f2b62f6f49685ff472fa4948ddd5c7f954e10f6cd2,PodSandboxId:23493d8925536b80d39c600c8c0ea7064d5647e527edd7d0629dd0d70c463e33,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d
3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1744152393419925499,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-tmwrn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a36a98a-dcbf-4fdc-b8cd-01bf6e34d08f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ee69dc5821040ff469e302848d70283ddb55ca2cb3f27b8b4609
a80685a41b3,PodSandboxId:beecf985e7fcfc6acac39b1b536df5418f082f0a372ade5fec77346622ea0ef0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_RUNNING,CreatedAt:1744152390575732772,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-t88l4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f63d680-43c5-4442-b7e3-d67d63e08c91,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3452bbd43620760cce4e2a1495a32f419ab3b149266d3ce160aa6ab87cec19e5,PodSandboxId:46e9bca86
63860e0730fb17765fc08b585bc1a50c328bae05d4170d464169a4f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1744152379693784144,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-355098,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 244d0583006a608c8b83292f5e9eed4c,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4657ddcd787010d32fe6d335e26451125e6cc1d8191c21df7cf194c1e51af37b,PodSandboxId:cd2925f8296e350630981930ee8cc7a97a42876b9e5a2cea8e49522f
dea1dc61,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_RUNNING,CreatedAt:1744152379714535168,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-355098,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44e770560cbfed119ad4585fbf99e128,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36c0277b71e19f1e0507194f36a9d5676aa868d8863931aad976a9448f99a024,PodSandboxId:a31600cb6affaca87cfdf95cf4612f87bde25db4a33b4d312d311206661b1610,Metadata
:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_RUNNING,CreatedAt:1744152379616856651,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-355098,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef84559a469b6f692f8da6aeb1d77aa9,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:683e8d222b068dcd7730b274a97c5045d8badab841f56aa7fbcac15ab6a0f498,PodSandboxId:9e899153b1e53a1fd73925e587e74771de89ae281db05b4c25f135b4cad375c
6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_RUNNING,CreatedAt:1744152379634276736,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-355098,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a936cf37e8c8d267320a8841e6b3785b,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d0ad8267-3b31-4453-95f9-67dc608320be name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	1c651a539e214       docker.io/library/nginx@sha256:4ff102c5d78d254a6f0da062b3cf39eaf07f01eec0927fd21e219d0af8bc0591                              2 minutes ago       Running             nginx                     0                   18257259ac056       nginx
	6855963a12bd7       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          2 minutes ago       Running             busybox                   0                   200ce6c052852       busybox
	a3315b29f88d5       registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b             3 minutes ago       Running             controller                0                   118519149c99a       ingress-nginx-controller-56d7c84fd4-hwnwt
	5713f46d6ff35       a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb                                                             3 minutes ago       Exited              patch                     1                   c148492a3a87c       ingress-nginx-admission-patch-zxllf
	b352991442288       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f   3 minutes ago       Exited              create                    0                   0a1927badfae0       ingress-nginx-admission-create-zcxjn
	b072bfc9cfe23       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                     3 minutes ago       Running             amd-gpu-device-plugin     0                   f3dbfa3e3503a       amd-gpu-device-plugin-bfpvp
	59fd45c2e74e5       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab             4 minutes ago       Running             minikube-ingress-dns      0                   d8ddc5d089e0e       kube-ingress-dns-minikube
	75583b1f65ee4       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             4 minutes ago       Running             storage-provisioner       0                   157550ff8ef8b       storage-provisioner
	bc7527da264f8       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                             4 minutes ago       Running             coredns                   0                   23493d8925536       coredns-668d6bf9bc-tmwrn
	4ee69dc582104       f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5                                                             4 minutes ago       Running             kube-proxy                0                   beecf985e7fcf       kube-proxy-t88l4
	4657ddcd78701       d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d                                                             4 minutes ago       Running             kube-scheduler            0                   cd2925f8296e3       kube-scheduler-addons-355098
	3452bbd436207       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc                                                             4 minutes ago       Running             etcd                      0                   46e9bca866386       etcd-addons-355098
	683e8d222b068       85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef                                                             4 minutes ago       Running             kube-apiserver            0                   9e899153b1e53       kube-apiserver-addons-355098
	36c0277b71e19       b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389                                                             4 minutes ago       Running             kube-controller-manager   0                   a31600cb6affa       kube-controller-manager-addons-355098
	
	
	==> coredns [bc7527da264f870dc99f89f2b62f6f49685ff472fa4948ddd5c7f954e10f6cd2] <==
	[INFO] 10.244.0.9:55527 - 11980 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000880668s
	[INFO] 10.244.0.9:55527 - 1400 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000133895s
	[INFO] 10.244.0.9:55527 - 7540 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000065012s
	[INFO] 10.244.0.9:55527 - 20013 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000118333s
	[INFO] 10.244.0.9:55527 - 48735 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000123941s
	[INFO] 10.244.0.9:55527 - 52093 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000142s
	[INFO] 10.244.0.9:55527 - 47482 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000085507s
	[INFO] 10.244.0.9:52028 - 31261 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.00009737s
	[INFO] 10.244.0.9:52028 - 31535 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.00020083s
	[INFO] 10.244.0.9:46732 - 31481 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000060118s
	[INFO] 10.244.0.9:46732 - 31725 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000064305s
	[INFO] 10.244.0.9:48950 - 21026 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000069541s
	[INFO] 10.244.0.9:48950 - 21492 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000082155s
	[INFO] 10.244.0.9:46173 - 46901 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000065032s
	[INFO] 10.244.0.9:46173 - 47340 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000097392s
	[INFO] 10.244.0.23:39576 - 1963 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000417992s
	[INFO] 10.244.0.23:38930 - 16796 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000567159s
	[INFO] 10.244.0.23:49685 - 25518 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000195643s
	[INFO] 10.244.0.23:41769 - 29234 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000127349s
	[INFO] 10.244.0.23:43021 - 16209 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000153538s
	[INFO] 10.244.0.23:36928 - 60811 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000147726s
	[INFO] 10.244.0.23:45342 - 43442 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 610 0.001136225s
	[INFO] 10.244.0.23:45386 - 55834 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001032597s
	[INFO] 10.244.0.27:40767 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000396587s
	[INFO] 10.244.0.27:44084 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000203651s
	
	
	==> describe nodes <==
	Name:               addons-355098
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-355098
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fd2f4c3eba2bd452b5997c855e28d0966165ba83
	                    minikube.k8s.io/name=addons-355098
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_04_08T22_46_25_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-355098
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 08 Apr 2025 22:46:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-355098
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 08 Apr 2025 22:51:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 08 Apr 2025 22:49:29 +0000   Tue, 08 Apr 2025 22:46:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 08 Apr 2025 22:49:29 +0000   Tue, 08 Apr 2025 22:46:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 08 Apr 2025 22:49:29 +0000   Tue, 08 Apr 2025 22:46:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 08 Apr 2025 22:49:29 +0000   Tue, 08 Apr 2025 22:46:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.199
	  Hostname:    addons-355098
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 15596c5bc2f440e89baf9945c8198fd3
	  System UUID:                15596c5b-c2f4-40e8-9baf-9945c8198fd3
	  Boot ID:                    d38a8865-9b58-4952-a260-2a6e8373eefc
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.32.2
	  Kube-Proxy Version:         v1.32.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m4s
	  default                     hello-world-app-7d9564db4-qgpqr              0 (0%)        0 (0%)      0 (0%)           0 (0%)         1s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m25s
	  ingress-nginx               ingress-nginx-controller-56d7c84fd4-hwnwt    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         4m24s
	  kube-system                 amd-gpu-device-plugin-bfpvp                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m29s
	  kube-system                 coredns-668d6bf9bc-tmwrn                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m32s
	  kube-system                 etcd-addons-355098                           100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m37s
	  kube-system                 kube-apiserver-addons-355098                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m37s
	  kube-system                 kube-controller-manager-addons-355098        200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m37s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m28s
	  kube-system                 kube-proxy-t88l4                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m32s
	  kube-system                 kube-scheduler-addons-355098                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m37s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             260Mi (6%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m30s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  4m42s (x8 over 4m42s)  kubelet          Node addons-355098 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m42s (x8 over 4m42s)  kubelet          Node addons-355098 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m42s (x7 over 4m42s)  kubelet          Node addons-355098 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m37s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4m37s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m37s                  kubelet          Node addons-355098 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m37s                  kubelet          Node addons-355098 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m37s                  kubelet          Node addons-355098 status is now: NodeHasSufficientPID
	  Normal  NodeReady                4m35s                  kubelet          Node addons-355098 status is now: NodeReady
	  Normal  RegisteredNode           4m33s                  node-controller  Node addons-355098 event: Registered Node addons-355098 in Controller
	
	
	==> dmesg <==
	[  +5.983935] systemd-fstab-generator[1227]: Ignoring "noauto" option for root device
	[  +0.083561] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.222672] systemd-fstab-generator[1351]: Ignoring "noauto" option for root device
	[  +0.156066] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.064987] kauditd_printk_skb: 114 callbacks suppressed
	[  +5.149292] kauditd_printk_skb: 175 callbacks suppressed
	[ +10.549509] kauditd_printk_skb: 41 callbacks suppressed
	[Apr 8 22:47] kauditd_printk_skb: 5 callbacks suppressed
	[ +17.204418] kauditd_printk_skb: 29 callbacks suppressed
	[  +7.517706] kauditd_printk_skb: 4 callbacks suppressed
	[  +5.594443] kauditd_printk_skb: 20 callbacks suppressed
	[  +5.151487] kauditd_printk_skb: 32 callbacks suppressed
	[  +5.300827] kauditd_printk_skb: 43 callbacks suppressed
	[ +11.080857] kauditd_printk_skb: 6 callbacks suppressed
	[Apr 8 22:48] kauditd_printk_skb: 7 callbacks suppressed
	[ +16.432654] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.149967] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.315055] kauditd_printk_skb: 52 callbacks suppressed
	[  +5.214797] kauditd_printk_skb: 39 callbacks suppressed
	[  +5.443763] kauditd_printk_skb: 72 callbacks suppressed
	[ +14.672877] kauditd_printk_skb: 17 callbacks suppressed
	[  +6.235341] kauditd_printk_skb: 2 callbacks suppressed
	[Apr 8 22:49] kauditd_printk_skb: 6 callbacks suppressed
	[  +6.890624] kauditd_printk_skb: 7 callbacks suppressed
	[Apr 8 22:50] kauditd_printk_skb: 49 callbacks suppressed
	
	
	==> etcd [3452bbd43620760cce4e2a1495a32f419ab3b149266d3ce160aa6ab87cec19e5] <==
	{"level":"info","ts":"2025-04-08T22:47:50.070867Z","caller":"traceutil/trace.go:171","msg":"trace[1622796709] transaction","detail":"{read_only:false; response_revision:1122; number_of_response:1; }","duration":"356.824149ms","start":"2025-04-08T22:47:49.714030Z","end":"2025-04-08T22:47:50.070854Z","steps":["trace[1622796709] 'process raft request'  (duration: 355.626454ms)"],"step_count":1}
	{"level":"warn","ts":"2025-04-08T22:47:50.071408Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-04-08T22:47:49.714014Z","time spent":"357.32975ms","remote":"127.0.0.1:49612","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1121 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2025-04-08T22:47:50.071596Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"259.776307ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-04-08T22:47:50.071626Z","caller":"traceutil/trace.go:171","msg":"trace[1293361540] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1122; }","duration":"259.826385ms","start":"2025-04-08T22:47:49.811793Z","end":"2025-04-08T22:47:50.071619Z","steps":["trace[1293361540] 'agreement among raft nodes before linearized reading'  (duration: 259.782389ms)"],"step_count":1}
	{"level":"warn","ts":"2025-04-08T22:47:50.070927Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"339.65824ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-04-08T22:47:50.071732Z","caller":"traceutil/trace.go:171","msg":"trace[1748527542] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1122; }","duration":"340.495183ms","start":"2025-04-08T22:47:49.731231Z","end":"2025-04-08T22:47:50.071727Z","steps":["trace[1748527542] 'agreement among raft nodes before linearized reading'  (duration: 339.661469ms)"],"step_count":1}
	{"level":"warn","ts":"2025-04-08T22:47:50.071751Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-04-08T22:47:49.731172Z","time spent":"340.57288ms","remote":"127.0.0.1:49634","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"warn","ts":"2025-04-08T22:47:50.071836Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"186.812292ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-04-08T22:47:50.071870Z","caller":"traceutil/trace.go:171","msg":"trace[1410185636] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1122; }","duration":"186.8387ms","start":"2025-04-08T22:47:49.885019Z","end":"2025-04-08T22:47:50.071857Z","steps":["trace[1410185636] 'agreement among raft nodes before linearized reading'  (duration: 186.80527ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-08T22:47:56.218342Z","caller":"traceutil/trace.go:171","msg":"trace[1718214353] transaction","detail":"{read_only:false; response_revision:1162; number_of_response:1; }","duration":"110.482312ms","start":"2025-04-08T22:47:56.107805Z","end":"2025-04-08T22:47:56.218288Z","steps":["trace[1718214353] 'process raft request'  (duration: 110.399355ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-08T22:48:23.401509Z","caller":"traceutil/trace.go:171","msg":"trace[1924817089] transaction","detail":"{read_only:false; number_of_response:1; response_revision:1319; }","duration":"152.504429ms","start":"2025-04-08T22:48:23.248990Z","end":"2025-04-08T22:48:23.401494Z","steps":["trace[1924817089] 'process raft request'  (duration: 152.373423ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-08T22:48:59.002500Z","caller":"traceutil/trace.go:171","msg":"trace[37735073] transaction","detail":"{read_only:false; response_revision:1641; number_of_response:1; }","duration":"434.11786ms","start":"2025-04-08T22:48:58.568362Z","end":"2025-04-08T22:48:59.002480Z","steps":["trace[37735073] 'process raft request'  (duration: 434.016341ms)"],"step_count":1}
	{"level":"warn","ts":"2025-04-08T22:48:59.002742Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-04-08T22:48:58.568347Z","time spent":"434.321561ms","remote":"127.0.0.1:49612","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1637 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2025-04-08T22:48:59.014804Z","caller":"traceutil/trace.go:171","msg":"trace[199153168] transaction","detail":"{read_only:false; response_revision:1642; number_of_response:1; }","duration":"357.252758ms","start":"2025-04-08T22:48:58.657479Z","end":"2025-04-08T22:48:59.014732Z","steps":["trace[199153168] 'process raft request'  (duration: 356.744266ms)"],"step_count":1}
	{"level":"warn","ts":"2025-04-08T22:48:59.015158Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-04-08T22:48:58.657461Z","time spent":"357.597376ms","remote":"127.0.0.1:49622","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":12185,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/minions/addons-355098\" mod_revision:1389 > success:<request_put:<key:\"/registry/minions/addons-355098\" value_size:12146 >> failure:<request_range:<key:\"/registry/minions/addons-355098\" > >"}
	{"level":"info","ts":"2025-04-08T22:48:59.021649Z","caller":"traceutil/trace.go:171","msg":"trace[90703733] linearizableReadLoop","detail":"{readStateIndex:1695; appliedIndex:1693; }","duration":"307.076852ms","start":"2025-04-08T22:48:58.714556Z","end":"2025-04-08T22:48:59.021633Z","steps":["trace[90703733] 'read index received'  (duration: 287.783533ms)","trace[90703733] 'applied index is now lower than readState.Index'  (duration: 19.292696ms)"],"step_count":2}
	{"level":"warn","ts":"2025-04-08T22:48:59.021986Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"307.407123ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/persistentvolumes/pvc-fb751989-87e0-4024-b7d5-3cb6b29c4ba8\" limit:1 ","response":"range_response_count:1 size:1262"}
	{"level":"info","ts":"2025-04-08T22:48:59.022134Z","caller":"traceutil/trace.go:171","msg":"trace[424444627] range","detail":"{range_begin:/registry/persistentvolumes/pvc-fb751989-87e0-4024-b7d5-3cb6b29c4ba8; range_end:; response_count:1; response_revision:1642; }","duration":"307.606023ms","start":"2025-04-08T22:48:58.714518Z","end":"2025-04-08T22:48:59.022124Z","steps":["trace[424444627] 'agreement among raft nodes before linearized reading'  (duration: 307.368167ms)"],"step_count":1}
	{"level":"warn","ts":"2025-04-08T22:48:59.022175Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-04-08T22:48:58.714503Z","time spent":"307.661663ms","remote":"127.0.0.1:49598","response type":"/etcdserverpb.KV/Range","request count":0,"request size":72,"response count":1,"response size":1286,"request content":"key:\"/registry/persistentvolumes/pvc-fb751989-87e0-4024-b7d5-3cb6b29c4ba8\" limit:1 "}
	{"level":"warn","ts":"2025-04-08T22:48:59.023575Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"181.129236ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-04-08T22:48:59.024521Z","caller":"traceutil/trace.go:171","msg":"trace[175776190] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1642; }","duration":"182.108998ms","start":"2025-04-08T22:48:58.842404Z","end":"2025-04-08T22:48:59.024513Z","steps":["trace[175776190] 'agreement among raft nodes before linearized reading'  (duration: 181.133032ms)"],"step_count":1}
	{"level":"warn","ts":"2025-04-08T22:48:59.024489Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"140.512185ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-04-08T22:48:59.025445Z","caller":"traceutil/trace.go:171","msg":"trace[752463131] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1642; }","duration":"141.468368ms","start":"2025-04-08T22:48:58.883968Z","end":"2025-04-08T22:48:59.025436Z","steps":["trace[752463131] 'agreement among raft nodes before linearized reading'  (duration: 140.502963ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-08T22:49:31.246760Z","caller":"traceutil/trace.go:171","msg":"trace[2077732893] transaction","detail":"{read_only:false; response_revision:1742; number_of_response:1; }","duration":"102.311175ms","start":"2025-04-08T22:49:31.144428Z","end":"2025-04-08T22:49:31.246739Z","steps":["trace[2077732893] 'process raft request'  (duration: 102.01671ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-08T22:49:34.772678Z","caller":"traceutil/trace.go:171","msg":"trace[2121090920] transaction","detail":"{read_only:false; number_of_response:1; response_revision:1789; }","duration":"148.268216ms","start":"2025-04-08T22:49:34.624396Z","end":"2025-04-08T22:49:34.772665Z","steps":["trace[2121090920] 'process raft request'  (duration: 148.187434ms)"],"step_count":1}
	
	
	==> kernel <==
	 22:51:01 up 5 min,  0 users,  load average: 0.18, 0.63, 0.34
	Linux addons-355098 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [683e8d222b068dcd7730b274a97c5045d8badab841f56aa7fbcac15ab6a0f498] <==
	I0408 22:47:04.868447       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0408 22:48:09.037521       1 conn.go:339] Error on socket receive: read tcp 192.168.39.199:8443->192.168.39.1:42622: use of closed network connection
	E0408 22:48:09.204378       1 conn.go:339] Error on socket receive: read tcp 192.168.39.199:8443->192.168.39.1:42650: use of closed network connection
	I0408 22:48:18.350014       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.96.152.105"}
	I0408 22:48:36.223194       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	I0408 22:48:36.542168       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0408 22:48:36.760924       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.106.75.90"}
	W0408 22:48:37.368700       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	E0408 22:48:52.548890       1 authentication.go:74] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0408 22:49:05.813771       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0408 22:49:06.905768       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0408 22:49:33.722087       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0408 22:49:33.722156       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0408 22:49:33.753263       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0408 22:49:33.753556       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0408 22:49:33.776140       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0408 22:49:33.776197       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0408 22:49:33.778962       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0408 22:49:33.779033       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0408 22:49:33.900193       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0408 22:49:33.900236       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0408 22:49:34.780776       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0408 22:49:34.901811       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0408 22:49:34.910511       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0408 22:51:00.356287       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.108.153.61"}
	
	
	==> kube-controller-manager [36c0277b71e19f1e0507194f36a9d5676aa868d8863931aad976a9448f99a024] <==
	E0408 22:50:09.070372       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0408 22:50:12.955151       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0408 22:50:12.959988       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshots"
	W0408 22:50:12.963904       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0408 22:50:12.963964       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0408 22:50:37.767209       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0408 22:50:37.768124       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshotclasses"
	W0408 22:50:37.768966       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0408 22:50:37.769001       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0408 22:50:42.935654       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0408 22:50:42.936428       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshots"
	W0408 22:50:42.937143       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0408 22:50:42.937167       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0408 22:50:49.665496       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0408 22:50:49.666542       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="gadget.kinvolk.io/v1alpha1, Resource=traces"
	W0408 22:50:49.667498       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0408 22:50:49.667565       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0408 22:50:55.035492       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0408 22:50:55.036501       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshotcontents"
	W0408 22:50:55.037407       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0408 22:50:55.037482       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0408 22:51:00.162867       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="31.794156ms"
	I0408 22:51:00.173617       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="10.609494ms"
	I0408 22:51:00.173694       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="33.283µs"
	I0408 22:51:00.187441       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="33.647µs"
	
	
	==> kube-proxy [4ee69dc5821040ff469e302848d70283ddb55ca2cb3f27b8b4609a80685a41b3] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0408 22:46:31.279372       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0408 22:46:31.310087       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.39.199"]
	E0408 22:46:31.310153       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0408 22:46:31.389222       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0408 22:46:31.389265       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0408 22:46:31.389288       1 server_linux.go:170] "Using iptables Proxier"
	I0408 22:46:31.394875       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0408 22:46:31.395147       1 server.go:497] "Version info" version="v1.32.2"
	I0408 22:46:31.395159       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0408 22:46:31.398285       1 config.go:199] "Starting service config controller"
	I0408 22:46:31.398378       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0408 22:46:31.398407       1 config.go:105] "Starting endpoint slice config controller"
	I0408 22:46:31.398412       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0408 22:46:31.398980       1 config.go:329] "Starting node config controller"
	I0408 22:46:31.398988       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0408 22:46:31.499379       1 shared_informer.go:320] Caches are synced for node config
	I0408 22:46:31.499406       1 shared_informer.go:320] Caches are synced for service config
	I0408 22:46:31.499415       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [4657ddcd787010d32fe6d335e26451125e6cc1d8191c21df7cf194c1e51af37b] <==
	W0408 22:46:22.188694       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0408 22:46:22.188718       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0408 22:46:22.188746       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0408 22:46:22.188770       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0408 22:46:23.010986       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0408 22:46:23.011046       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0408 22:46:23.024603       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0408 22:46:23.024673       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0408 22:46:23.092110       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0408 22:46:23.092162       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0408 22:46:23.154485       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0408 22:46:23.154540       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0408 22:46:23.260080       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0408 22:46:23.260135       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0408 22:46:23.276120       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0408 22:46:23.276172       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0408 22:46:23.297949       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0408 22:46:23.297999       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0408 22:46:23.332230       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0408 22:46:23.332410       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0408 22:46:23.348098       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0408 22:46:23.348270       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0408 22:46:23.385514       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0408 22:46:23.386446       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0408 22:46:23.671420       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 08 22:50:24 addons-355098 kubelet[1234]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 08 22:50:24 addons-355098 kubelet[1234]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 08 22:50:24 addons-355098 kubelet[1234]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 08 22:50:25 addons-355098 kubelet[1234]: E0408 22:50:25.112234    1234 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744152625111852273,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595808,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 08 22:50:25 addons-355098 kubelet[1234]: E0408 22:50:25.112262    1234 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744152625111852273,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595808,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 08 22:50:27 addons-355098 kubelet[1234]: I0408 22:50:27.982014    1234 scope.go:117] "RemoveContainer" containerID="cb5819da33bb5f9712ec120adf8cec6c4fcbf63aebeaf540760dd281ca4fc3ef"
	Apr 08 22:50:35 addons-355098 kubelet[1234]: E0408 22:50:35.114494    1234 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744152635113994246,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595808,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 08 22:50:35 addons-355098 kubelet[1234]: E0408 22:50:35.114520    1234 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744152635113994246,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595808,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 08 22:50:45 addons-355098 kubelet[1234]: E0408 22:50:45.116695    1234 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744152645116418175,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595808,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 08 22:50:45 addons-355098 kubelet[1234]: E0408 22:50:45.116754    1234 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744152645116418175,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595808,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 08 22:50:53 addons-355098 kubelet[1234]: I0408 22:50:53.825073    1234 kubelet_pods.go:1021] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-bfpvp" secret="" err="secret \"gcp-auth\" not found"
	Apr 08 22:50:55 addons-355098 kubelet[1234]: E0408 22:50:55.119202    1234 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744152655118883370,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595808,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 08 22:50:55 addons-355098 kubelet[1234]: E0408 22:50:55.119248    1234 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744152655118883370,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595808,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 08 22:51:00 addons-355098 kubelet[1234]: I0408 22:51:00.160207    1234 memory_manager.go:355] "RemoveStaleState removing state" podUID="c2de5602-2ab4-40b9-9b5a-ed46145c609a" containerName="volume-snapshot-controller"
	Apr 08 22:51:00 addons-355098 kubelet[1234]: I0408 22:51:00.160275    1234 memory_manager.go:355] "RemoveStaleState removing state" podUID="6b516a7e-d94a-431b-b7c3-b558c409e891" containerName="volume-snapshot-controller"
	Apr 08 22:51:00 addons-355098 kubelet[1234]: I0408 22:51:00.160283    1234 memory_manager.go:355] "RemoveStaleState removing state" podUID="2d612896-e22e-4c45-a4e1-604fb6e6b85b" containerName="hostpath"
	Apr 08 22:51:00 addons-355098 kubelet[1234]: I0408 22:51:00.160290    1234 memory_manager.go:355] "RemoveStaleState removing state" podUID="2d612896-e22e-4c45-a4e1-604fb6e6b85b" containerName="csi-provisioner"
	Apr 08 22:51:00 addons-355098 kubelet[1234]: I0408 22:51:00.160355    1234 memory_manager.go:355] "RemoveStaleState removing state" podUID="2d612896-e22e-4c45-a4e1-604fb6e6b85b" containerName="node-driver-registrar"
	Apr 08 22:51:00 addons-355098 kubelet[1234]: I0408 22:51:00.160362    1234 memory_manager.go:355] "RemoveStaleState removing state" podUID="2d612896-e22e-4c45-a4e1-604fb6e6b85b" containerName="csi-external-health-monitor-controller"
	Apr 08 22:51:00 addons-355098 kubelet[1234]: I0408 22:51:00.160369    1234 memory_manager.go:355] "RemoveStaleState removing state" podUID="d77f9f5f-452a-4be9-94f3-e2345d7b4f24" containerName="csi-resizer"
	Apr 08 22:51:00 addons-355098 kubelet[1234]: I0408 22:51:00.160374    1234 memory_manager.go:355] "RemoveStaleState removing state" podUID="2d612896-e22e-4c45-a4e1-604fb6e6b85b" containerName="liveness-probe"
	Apr 08 22:51:00 addons-355098 kubelet[1234]: I0408 22:51:00.160379    1234 memory_manager.go:355] "RemoveStaleState removing state" podUID="cdc61e13-f1a1-452b-a6bc-bf8209d5dad5" containerName="csi-attacher"
	Apr 08 22:51:00 addons-355098 kubelet[1234]: I0408 22:51:00.160383    1234 memory_manager.go:355] "RemoveStaleState removing state" podUID="2d612896-e22e-4c45-a4e1-604fb6e6b85b" containerName="csi-snapshotter"
	Apr 08 22:51:00 addons-355098 kubelet[1234]: I0408 22:51:00.160388    1234 memory_manager.go:355] "RemoveStaleState removing state" podUID="8df1f950-62f5-4a3a-ac01-bb003c91d28a" containerName="task-pv-container"
	Apr 08 22:51:00 addons-355098 kubelet[1234]: I0408 22:51:00.290570    1234 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbzp5\" (UniqueName: \"kubernetes.io/projected/4c26bad6-e263-4ce3-a373-a88ac1cae58c-kube-api-access-nbzp5\") pod \"hello-world-app-7d9564db4-qgpqr\" (UID: \"4c26bad6-e263-4ce3-a373-a88ac1cae58c\") " pod="default/hello-world-app-7d9564db4-qgpqr"
	
	
	==> storage-provisioner [75583b1f65ee42dd306335ff818c7e8b730c47e18fc71b0c255044af4dbc753b] <==
	I0408 22:46:36.814027       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0408 22:46:36.948558       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0408 22:46:36.948618       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0408 22:46:37.044788       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0408 22:46:37.045021       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-355098_65a08aa8-de1e-45b9-b887-e5d90f484564!
	I0408 22:46:37.046700       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"15a3cf4d-8414-4102-bb31-26b047b829a4", APIVersion:"v1", ResourceVersion:"671", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-355098_65a08aa8-de1e-45b9-b887-e5d90f484564 became leader
	I0408 22:46:37.523880       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-355098_65a08aa8-de1e-45b9-b887-e5d90f484564!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-355098 -n addons-355098
helpers_test.go:261: (dbg) Run:  kubectl --context addons-355098 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: hello-world-app-7d9564db4-qgpqr ingress-nginx-admission-create-zcxjn ingress-nginx-admission-patch-zxllf
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-355098 describe pod hello-world-app-7d9564db4-qgpqr ingress-nginx-admission-create-zcxjn ingress-nginx-admission-patch-zxllf
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-355098 describe pod hello-world-app-7d9564db4-qgpqr ingress-nginx-admission-create-zcxjn ingress-nginx-admission-patch-zxllf: exit status 1 (65.799612ms)

-- stdout --
	Name:             hello-world-app-7d9564db4-qgpqr
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-355098/192.168.39.199
	Start Time:       Tue, 08 Apr 2025 22:51:00 +0000
	Labels:           app=hello-world-app
	                  pod-template-hash=7d9564db4
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-world-app-7d9564db4
	Containers:
	  hello-world-app:
	    Container ID:   
	    Image:          docker.io/kicbase/echo-server:1.0
	    Image ID:       
	    Port:           8080/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-nbzp5 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-nbzp5:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  2s    default-scheduler  Successfully assigned default/hello-world-app-7d9564db4-qgpqr to addons-355098
	  Normal  Pulling    2s    kubelet            Pulling image "docker.io/kicbase/echo-server:1.0"

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-zcxjn" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-zxllf" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-355098 describe pod hello-world-app-7d9564db4-qgpqr ingress-nginx-admission-create-zcxjn ingress-nginx-admission-patch-zxllf: exit status 1
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-355098 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-355098 addons disable ingress-dns --alsologtostderr -v=1: (1.00639276s)
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-355098 addons disable ingress --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-355098 addons disable ingress --alsologtostderr -v=1: (7.716314926s)
--- FAIL: TestAddons/parallel/Ingress (154.94s)

TestFunctional/serial/SoftStart (1169.7s)

=== RUN   TestFunctional/serial/SoftStart
I0408 22:54:53.711359   16314 config.go:182] Loaded profile config "functional-546336": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
functional_test.go:676: (dbg) Run:  out/minikube-linux-amd64 start -p functional-546336 --alsologtostderr -v=8
E0408 22:55:41.803249   16314 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/addons-355098/client.crt: no such file or directory" logger="UnhandledError"
E0408 22:57:57.931754   16314 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/addons-355098/client.crt: no such file or directory" logger="UnhandledError"
E0408 22:58:25.646555   16314 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/addons-355098/client.crt: no such file or directory" logger="UnhandledError"
E0408 23:02:57.940815   16314 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/addons-355098/client.crt: no such file or directory" logger="UnhandledError"
E0408 23:07:57.940067   16314 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/addons-355098/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:676: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-546336 --alsologtostderr -v=8: exit status 109 (13m52.89565785s)

-- stdout --
	* [functional-546336] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20501
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20501-9125/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20501-9125/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "functional-546336" primary control-plane node in "functional-546336" cluster
	* Updating the running kvm2 "functional-546336" VM ...
	* Preparing Kubernetes v1.32.2 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

-- /stdout --
** stderr ** 
	I0408 22:54:53.750429   21772 out.go:345] Setting OutFile to fd 1 ...
	I0408 22:54:53.750673   21772 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0408 22:54:53.750778   21772 out.go:358] Setting ErrFile to fd 2...
	I0408 22:54:53.750790   21772 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0408 22:54:53.751041   21772 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20501-9125/.minikube/bin
	I0408 22:54:53.751600   21772 out.go:352] Setting JSON to false
	I0408 22:54:53.752542   21772 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":2239,"bootTime":1744150655,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0408 22:54:53.752628   21772 start.go:139] virtualization: kvm guest
	I0408 22:54:53.754529   21772 out.go:177] * [functional-546336] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0408 22:54:53.755700   21772 out.go:177]   - MINIKUBE_LOCATION=20501
	I0408 22:54:53.755700   21772 notify.go:220] Checking for updates...
	I0408 22:54:53.757645   21772 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0408 22:54:53.758881   21772 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20501-9125/kubeconfig
	I0408 22:54:53.760110   21772 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20501-9125/.minikube
	I0408 22:54:53.761221   21772 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0408 22:54:53.762262   21772 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0408 22:54:53.764007   21772 config.go:182] Loaded profile config "functional-546336": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0408 22:54:53.764090   21772 driver.go:404] Setting default libvirt URI to qemu:///system
	I0408 22:54:53.764531   21772 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 22:54:53.764591   21772 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 22:54:53.780528   21772 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37317
	I0408 22:54:53.780962   21772 main.go:141] libmachine: () Calling .GetVersion
	I0408 22:54:53.781388   21772 main.go:141] libmachine: Using API Version  1
	I0408 22:54:53.781409   21772 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 22:54:53.781752   21772 main.go:141] libmachine: () Calling .GetMachineName
	I0408 22:54:53.781914   21772 main.go:141] libmachine: (functional-546336) Calling .DriverName
	I0408 22:54:53.819375   21772 out.go:177] * Using the kvm2 driver based on existing profile
	I0408 22:54:53.820528   21772 start.go:297] selected driver: kvm2
	I0408 22:54:53.820538   21772 start.go:901] validating driver "kvm2" against &{Name:functional-546336 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:functional-546336 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.234 Port:8441 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 22:54:53.820619   21772 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0408 22:54:53.820910   21772 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 22:54:53.820988   21772 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20501-9125/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0408 22:54:53.835403   21772 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0408 22:54:53.836289   21772 cni.go:84] Creating CNI manager for ""
	I0408 22:54:53.836343   21772 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0408 22:54:53.836403   21772 start.go:340] cluster config:
	{Name:functional-546336 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:functional-546336 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.234 Port:8441 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 22:54:53.836507   21772 iso.go:125] acquiring lock: {Name:mk618477bad490b102618c53c9c8c6b34f33ce81 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 22:54:53.838584   21772 out.go:177] * Starting "functional-546336" primary control-plane node in "functional-546336" cluster
	I0408 22:54:53.839517   21772 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0408 22:54:53.839549   21772 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20501-9125/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4
	I0408 22:54:53.839557   21772 cache.go:56] Caching tarball of preloaded images
	I0408 22:54:53.839620   21772 preload.go:172] Found /home/jenkins/minikube-integration/20501-9125/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0408 22:54:53.839629   21772 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on crio
	I0408 22:54:53.839708   21772 profile.go:143] Saving config to /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/functional-546336/config.json ...
	I0408 22:54:53.839890   21772 start.go:360] acquireMachinesLock for functional-546336: {Name:mke7be7b51cfddf557a39ecf6493fff6a1168ec9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 22:54:53.839934   21772 start.go:364] duration metric: took 24.616µs to acquireMachinesLock for "functional-546336"
	I0408 22:54:53.839951   21772 start.go:96] Skipping create...Using existing machine configuration
	I0408 22:54:53.839957   21772 fix.go:54] fixHost starting: 
	I0408 22:54:53.840198   21772 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 22:54:53.840227   21772 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 22:54:53.853842   21772 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36351
	I0408 22:54:53.854248   21772 main.go:141] libmachine: () Calling .GetVersion
	I0408 22:54:53.854642   21772 main.go:141] libmachine: Using API Version  1
	I0408 22:54:53.854660   21772 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 22:54:53.854972   21772 main.go:141] libmachine: () Calling .GetMachineName
	I0408 22:54:53.855161   21772 main.go:141] libmachine: (functional-546336) Calling .DriverName
	I0408 22:54:53.855314   21772 main.go:141] libmachine: (functional-546336) Calling .GetState
	I0408 22:54:53.856978   21772 fix.go:112] recreateIfNeeded on functional-546336: state=Running err=<nil>
	W0408 22:54:53.856995   21772 fix.go:138] unexpected machine state, will restart: <nil>
	I0408 22:54:53.858448   21772 out.go:177] * Updating the running kvm2 "functional-546336" VM ...
	I0408 22:54:53.859370   21772 machine.go:93] provisionDockerMachine start ...
	I0408 22:54:53.859389   21772 main.go:141] libmachine: (functional-546336) Calling .DriverName
	I0408 22:54:53.859573   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHHostname
	I0408 22:54:53.861808   21772 main.go:141] libmachine: (functional-546336) DBG | domain functional-546336 has defined MAC address 52:54:00:8a:a0:b8 in network mk-functional-546336
	I0408 22:54:53.862195   21772 main.go:141] libmachine: (functional-546336) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:a0:b8", ip: ""} in network mk-functional-546336: {Iface:virbr1 ExpiryTime:2025-04-08 23:53:47 +0000 UTC Type:0 Mac:52:54:00:8a:a0:b8 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:functional-546336 Clientid:01:52:54:00:8a:a0:b8}
	I0408 22:54:53.862223   21772 main.go:141] libmachine: (functional-546336) DBG | domain functional-546336 has defined IP address 192.168.39.234 and MAC address 52:54:00:8a:a0:b8 in network mk-functional-546336
	I0408 22:54:53.862331   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHPort
	I0408 22:54:53.862495   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHKeyPath
	I0408 22:54:53.862642   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHKeyPath
	I0408 22:54:53.862769   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHUsername
	I0408 22:54:53.862913   21772 main.go:141] libmachine: Using SSH client type: native
	I0408 22:54:53.863111   21772 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.234 22 <nil> <nil>}
	I0408 22:54:53.863123   21772 main.go:141] libmachine: About to run SSH command:
	hostname
	I0408 22:54:53.975743   21772 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-546336
	
	I0408 22:54:53.975774   21772 main.go:141] libmachine: (functional-546336) Calling .GetMachineName
	I0408 22:54:53.976060   21772 buildroot.go:166] provisioning hostname "functional-546336"
	I0408 22:54:53.976090   21772 main.go:141] libmachine: (functional-546336) Calling .GetMachineName
	I0408 22:54:53.976275   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHHostname
	I0408 22:54:53.978794   21772 main.go:141] libmachine: (functional-546336) DBG | domain functional-546336 has defined MAC address 52:54:00:8a:a0:b8 in network mk-functional-546336
	I0408 22:54:53.979136   21772 main.go:141] libmachine: (functional-546336) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:a0:b8", ip: ""} in network mk-functional-546336: {Iface:virbr1 ExpiryTime:2025-04-08 23:53:47 +0000 UTC Type:0 Mac:52:54:00:8a:a0:b8 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:functional-546336 Clientid:01:52:54:00:8a:a0:b8}
	I0408 22:54:53.979155   21772 main.go:141] libmachine: (functional-546336) DBG | domain functional-546336 has defined IP address 192.168.39.234 and MAC address 52:54:00:8a:a0:b8 in network mk-functional-546336
	I0408 22:54:53.979343   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHPort
	I0408 22:54:53.979538   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHKeyPath
	I0408 22:54:53.979686   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHKeyPath
	I0408 22:54:53.979818   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHUsername
	I0408 22:54:53.979975   21772 main.go:141] libmachine: Using SSH client type: native
	I0408 22:54:53.980186   21772 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.234 22 <nil> <nil>}
	I0408 22:54:53.980207   21772 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-546336 && echo "functional-546336" | sudo tee /etc/hostname
	I0408 22:54:54.107226   21772 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-546336
	
	I0408 22:54:54.107256   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHHostname
	I0408 22:54:54.110121   21772 main.go:141] libmachine: (functional-546336) DBG | domain functional-546336 has defined MAC address 52:54:00:8a:a0:b8 in network mk-functional-546336
	I0408 22:54:54.110402   21772 main.go:141] libmachine: (functional-546336) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:a0:b8", ip: ""} in network mk-functional-546336: {Iface:virbr1 ExpiryTime:2025-04-08 23:53:47 +0000 UTC Type:0 Mac:52:54:00:8a:a0:b8 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:functional-546336 Clientid:01:52:54:00:8a:a0:b8}
	I0408 22:54:54.110442   21772 main.go:141] libmachine: (functional-546336) DBG | domain functional-546336 has defined IP address 192.168.39.234 and MAC address 52:54:00:8a:a0:b8 in network mk-functional-546336
	I0408 22:54:54.110575   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHPort
	I0408 22:54:54.110737   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHKeyPath
	I0408 22:54:54.110870   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHKeyPath
	I0408 22:54:54.110984   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHUsername
	I0408 22:54:54.111111   21772 main.go:141] libmachine: Using SSH client type: native
	I0408 22:54:54.111332   21772 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.234 22 <nil> <nil>}
	I0408 22:54:54.111355   21772 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-546336' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-546336/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-546336' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0408 22:54:54.224292   21772 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0408 22:54:54.224321   21772 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20501-9125/.minikube CaCertPath:/home/jenkins/minikube-integration/20501-9125/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20501-9125/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20501-9125/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20501-9125/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20501-9125/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20501-9125/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20501-9125/.minikube}
	I0408 22:54:54.224341   21772 buildroot.go:174] setting up certificates
	I0408 22:54:54.224352   21772 provision.go:84] configureAuth start
	I0408 22:54:54.224363   21772 main.go:141] libmachine: (functional-546336) Calling .GetMachineName
	I0408 22:54:54.224632   21772 main.go:141] libmachine: (functional-546336) Calling .GetIP
	I0408 22:54:54.227055   21772 main.go:141] libmachine: (functional-546336) DBG | domain functional-546336 has defined MAC address 52:54:00:8a:a0:b8 in network mk-functional-546336
	I0408 22:54:54.227343   21772 main.go:141] libmachine: (functional-546336) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:a0:b8", ip: ""} in network mk-functional-546336: {Iface:virbr1 ExpiryTime:2025-04-08 23:53:47 +0000 UTC Type:0 Mac:52:54:00:8a:a0:b8 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:functional-546336 Clientid:01:52:54:00:8a:a0:b8}
	I0408 22:54:54.227372   21772 main.go:141] libmachine: (functional-546336) DBG | domain functional-546336 has defined IP address 192.168.39.234 and MAC address 52:54:00:8a:a0:b8 in network mk-functional-546336
	I0408 22:54:54.227496   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHHostname
	I0408 22:54:54.229707   21772 main.go:141] libmachine: (functional-546336) DBG | domain functional-546336 has defined MAC address 52:54:00:8a:a0:b8 in network mk-functional-546336
	I0408 22:54:54.230025   21772 main.go:141] libmachine: (functional-546336) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:a0:b8", ip: ""} in network mk-functional-546336: {Iface:virbr1 ExpiryTime:2025-04-08 23:53:47 +0000 UTC Type:0 Mac:52:54:00:8a:a0:b8 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:functional-546336 Clientid:01:52:54:00:8a:a0:b8}
	I0408 22:54:54.230063   21772 main.go:141] libmachine: (functional-546336) DBG | domain functional-546336 has defined IP address 192.168.39.234 and MAC address 52:54:00:8a:a0:b8 in network mk-functional-546336
	I0408 22:54:54.230204   21772 provision.go:143] copyHostCerts
	I0408 22:54:54.230228   21772 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20501-9125/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20501-9125/.minikube/ca.pem
	I0408 22:54:54.230253   21772 exec_runner.go:144] found /home/jenkins/minikube-integration/20501-9125/.minikube/ca.pem, removing ...
	I0408 22:54:54.230267   21772 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20501-9125/.minikube/ca.pem
	I0408 22:54:54.230331   21772 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20501-9125/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20501-9125/.minikube/ca.pem (1082 bytes)
	I0408 22:54:54.230397   21772 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20501-9125/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20501-9125/.minikube/cert.pem
	I0408 22:54:54.230414   21772 exec_runner.go:144] found /home/jenkins/minikube-integration/20501-9125/.minikube/cert.pem, removing ...
	I0408 22:54:54.230421   21772 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20501-9125/.minikube/cert.pem
	I0408 22:54:54.230442   21772 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20501-9125/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20501-9125/.minikube/cert.pem (1123 bytes)
	I0408 22:54:54.230555   21772 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20501-9125/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20501-9125/.minikube/key.pem
	I0408 22:54:54.230580   21772 exec_runner.go:144] found /home/jenkins/minikube-integration/20501-9125/.minikube/key.pem, removing ...
	I0408 22:54:54.230584   21772 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20501-9125/.minikube/key.pem
	I0408 22:54:54.230614   21772 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20501-9125/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20501-9125/.minikube/key.pem (1675 bytes)
	I0408 22:54:54.230663   21772 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20501-9125/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20501-9125/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20501-9125/.minikube/certs/ca-key.pem org=jenkins.functional-546336 san=[127.0.0.1 192.168.39.234 functional-546336 localhost minikube]
	I0408 22:54:54.377433   21772 provision.go:177] copyRemoteCerts
	I0408 22:54:54.377494   21772 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0408 22:54:54.377516   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHHostname
	I0408 22:54:54.379910   21772 main.go:141] libmachine: (functional-546336) DBG | domain functional-546336 has defined MAC address 52:54:00:8a:a0:b8 in network mk-functional-546336
	I0408 22:54:54.380186   21772 main.go:141] libmachine: (functional-546336) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:a0:b8", ip: ""} in network mk-functional-546336: {Iface:virbr1 ExpiryTime:2025-04-08 23:53:47 +0000 UTC Type:0 Mac:52:54:00:8a:a0:b8 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:functional-546336 Clientid:01:52:54:00:8a:a0:b8}
	I0408 22:54:54.380208   21772 main.go:141] libmachine: (functional-546336) DBG | domain functional-546336 has defined IP address 192.168.39.234 and MAC address 52:54:00:8a:a0:b8 in network mk-functional-546336
	I0408 22:54:54.380353   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHPort
	I0408 22:54:54.380512   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHKeyPath
	I0408 22:54:54.380651   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHUsername
	I0408 22:54:54.380759   21772 sshutil.go:53] new ssh client: &{IP:192.168.39.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20501-9125/.minikube/machines/functional-546336/id_rsa Username:docker}
	I0408 22:54:54.469346   21772 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20501-9125/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0408 22:54:54.469406   21772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20501-9125/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0408 22:54:54.492119   21772 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20501-9125/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0408 22:54:54.492170   21772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20501-9125/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0408 22:54:54.515795   21772 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20501-9125/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0408 22:54:54.515854   21772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20501-9125/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0408 22:54:54.538157   21772 provision.go:87] duration metric: took 313.794377ms to configureAuth
	I0408 22:54:54.538179   21772 buildroot.go:189] setting minikube options for container-runtime
	I0408 22:54:54.538348   21772 config.go:182] Loaded profile config "functional-546336": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0408 22:54:54.538415   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHHostname
	I0408 22:54:54.540893   21772 main.go:141] libmachine: (functional-546336) DBG | domain functional-546336 has defined MAC address 52:54:00:8a:a0:b8 in network mk-functional-546336
	I0408 22:54:54.541189   21772 main.go:141] libmachine: (functional-546336) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:a0:b8", ip: ""} in network mk-functional-546336: {Iface:virbr1 ExpiryTime:2025-04-08 23:53:47 +0000 UTC Type:0 Mac:52:54:00:8a:a0:b8 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:functional-546336 Clientid:01:52:54:00:8a:a0:b8}
	I0408 22:54:54.541211   21772 main.go:141] libmachine: (functional-546336) DBG | domain functional-546336 has defined IP address 192.168.39.234 and MAC address 52:54:00:8a:a0:b8 in network mk-functional-546336
	I0408 22:54:54.541388   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHPort
	I0408 22:54:54.541569   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHKeyPath
	I0408 22:54:54.541794   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHKeyPath
	I0408 22:54:54.541956   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHUsername
	I0408 22:54:54.542154   21772 main.go:141] libmachine: Using SSH client type: native
	I0408 22:54:54.542410   21772 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.234 22 <nil> <nil>}
	I0408 22:54:54.542429   21772 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0408 22:55:00.049143   21772 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0408 22:55:00.049177   21772 machine.go:96] duration metric: took 6.189793928s to provisionDockerMachine
	I0408 22:55:00.049193   21772 start.go:293] postStartSetup for "functional-546336" (driver="kvm2")
	I0408 22:55:00.049216   21772 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0408 22:55:00.049238   21772 main.go:141] libmachine: (functional-546336) Calling .DriverName
	I0408 22:55:00.049527   21772 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0408 22:55:00.049554   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHHostname
	I0408 22:55:00.052053   21772 main.go:141] libmachine: (functional-546336) DBG | domain functional-546336 has defined MAC address 52:54:00:8a:a0:b8 in network mk-functional-546336
	I0408 22:55:00.052329   21772 main.go:141] libmachine: (functional-546336) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:a0:b8", ip: ""} in network mk-functional-546336: {Iface:virbr1 ExpiryTime:2025-04-08 23:53:47 +0000 UTC Type:0 Mac:52:54:00:8a:a0:b8 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:functional-546336 Clientid:01:52:54:00:8a:a0:b8}
	I0408 22:55:00.052357   21772 main.go:141] libmachine: (functional-546336) DBG | domain functional-546336 has defined IP address 192.168.39.234 and MAC address 52:54:00:8a:a0:b8 in network mk-functional-546336
	I0408 22:55:00.052449   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHPort
	I0408 22:55:00.052621   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHKeyPath
	I0408 22:55:00.052774   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHUsername
	I0408 22:55:00.052915   21772 sshutil.go:53] new ssh client: &{IP:192.168.39.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20501-9125/.minikube/machines/functional-546336/id_rsa Username:docker}
	I0408 22:55:00.137252   21772 ssh_runner.go:195] Run: cat /etc/os-release
	I0408 22:55:00.140999   21772 command_runner.go:130] > NAME=Buildroot
	I0408 22:55:00.141018   21772 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0408 22:55:00.141022   21772 command_runner.go:130] > ID=buildroot
	I0408 22:55:00.141034   21772 command_runner.go:130] > VERSION_ID=2023.02.9
	I0408 22:55:00.141041   21772 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0408 22:55:00.141078   21772 info.go:137] Remote host: Buildroot 2023.02.9
	I0408 22:55:00.141091   21772 filesync.go:126] Scanning /home/jenkins/minikube-integration/20501-9125/.minikube/addons for local assets ...
	I0408 22:55:00.141153   21772 filesync.go:126] Scanning /home/jenkins/minikube-integration/20501-9125/.minikube/files for local assets ...
	I0408 22:55:00.141241   21772 filesync.go:149] local asset: /home/jenkins/minikube-integration/20501-9125/.minikube/files/etc/ssl/certs/163142.pem -> 163142.pem in /etc/ssl/certs
	I0408 22:55:00.141253   21772 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20501-9125/.minikube/files/etc/ssl/certs/163142.pem -> /etc/ssl/certs/163142.pem
	I0408 22:55:00.141327   21772 filesync.go:149] local asset: /home/jenkins/minikube-integration/20501-9125/.minikube/files/etc/test/nested/copy/16314/hosts -> hosts in /etc/test/nested/copy/16314
	I0408 22:55:00.141336   21772 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20501-9125/.minikube/files/etc/test/nested/copy/16314/hosts -> /etc/test/nested/copy/16314/hosts
	I0408 22:55:00.141386   21772 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/16314
	I0408 22:55:00.149913   21772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20501-9125/.minikube/files/etc/ssl/certs/163142.pem --> /etc/ssl/certs/163142.pem (1708 bytes)
	I0408 22:55:00.172587   21772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20501-9125/.minikube/files/etc/test/nested/copy/16314/hosts --> /etc/test/nested/copy/16314/hosts (40 bytes)
	I0408 22:55:00.194320   21772 start.go:296] duration metric: took 145.104306ms for postStartSetup
	I0408 22:55:00.194353   21772 fix.go:56] duration metric: took 6.354395244s for fixHost
	I0408 22:55:00.194371   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHHostname
	I0408 22:55:00.197105   21772 main.go:141] libmachine: (functional-546336) DBG | domain functional-546336 has defined MAC address 52:54:00:8a:a0:b8 in network mk-functional-546336
	I0408 22:55:00.197468   21772 main.go:141] libmachine: (functional-546336) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:a0:b8", ip: ""} in network mk-functional-546336: {Iface:virbr1 ExpiryTime:2025-04-08 23:53:47 +0000 UTC Type:0 Mac:52:54:00:8a:a0:b8 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:functional-546336 Clientid:01:52:54:00:8a:a0:b8}
	I0408 22:55:00.197508   21772 main.go:141] libmachine: (functional-546336) DBG | domain functional-546336 has defined IP address 192.168.39.234 and MAC address 52:54:00:8a:a0:b8 in network mk-functional-546336
	I0408 22:55:00.197619   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHPort
	I0408 22:55:00.197806   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHKeyPath
	I0408 22:55:00.197977   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHKeyPath
	I0408 22:55:00.198135   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHUsername
	I0408 22:55:00.198315   21772 main.go:141] libmachine: Using SSH client type: native
	I0408 22:55:00.198518   21772 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.234 22 <nil> <nil>}
	I0408 22:55:00.198529   21772 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0408 22:55:00.312401   21772 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744152900.293880637
	
	I0408 22:55:00.312424   21772 fix.go:216] guest clock: 1744152900.293880637
	I0408 22:55:00.312432   21772 fix.go:229] Guest: 2025-04-08 22:55:00.293880637 +0000 UTC Remote: 2025-04-08 22:55:00.194356923 +0000 UTC m=+6.478226412 (delta=99.523714ms)
	I0408 22:55:00.312463   21772 fix.go:200] guest clock delta is within tolerance: 99.523714ms
	I0408 22:55:00.312469   21772 start.go:83] releasing machines lock for "functional-546336", held for 6.472524067s
	I0408 22:55:00.312490   21772 main.go:141] libmachine: (functional-546336) Calling .DriverName
	I0408 22:55:00.312723   21772 main.go:141] libmachine: (functional-546336) Calling .GetIP
	I0408 22:55:00.315235   21772 main.go:141] libmachine: (functional-546336) DBG | domain functional-546336 has defined MAC address 52:54:00:8a:a0:b8 in network mk-functional-546336
	I0408 22:55:00.315592   21772 main.go:141] libmachine: (functional-546336) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:a0:b8", ip: ""} in network mk-functional-546336: {Iface:virbr1 ExpiryTime:2025-04-08 23:53:47 +0000 UTC Type:0 Mac:52:54:00:8a:a0:b8 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:functional-546336 Clientid:01:52:54:00:8a:a0:b8}
	I0408 22:55:00.315620   21772 main.go:141] libmachine: (functional-546336) DBG | domain functional-546336 has defined IP address 192.168.39.234 and MAC address 52:54:00:8a:a0:b8 in network mk-functional-546336
	I0408 22:55:00.315756   21772 main.go:141] libmachine: (functional-546336) Calling .DriverName
	I0408 22:55:00.316286   21772 main.go:141] libmachine: (functional-546336) Calling .DriverName
	I0408 22:55:00.316432   21772 main.go:141] libmachine: (functional-546336) Calling .DriverName
	I0408 22:55:00.316535   21772 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0408 22:55:00.316574   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHHostname
	I0408 22:55:00.316683   21772 ssh_runner.go:195] Run: cat /version.json
	I0408 22:55:00.316708   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHHostname
	I0408 22:55:00.319048   21772 main.go:141] libmachine: (functional-546336) DBG | domain functional-546336 has defined MAC address 52:54:00:8a:a0:b8 in network mk-functional-546336
	I0408 22:55:00.319325   21772 main.go:141] libmachine: (functional-546336) DBG | domain functional-546336 has defined MAC address 52:54:00:8a:a0:b8 in network mk-functional-546336
	I0408 22:55:00.319354   21772 main.go:141] libmachine: (functional-546336) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:a0:b8", ip: ""} in network mk-functional-546336: {Iface:virbr1 ExpiryTime:2025-04-08 23:53:47 +0000 UTC Type:0 Mac:52:54:00:8a:a0:b8 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:functional-546336 Clientid:01:52:54:00:8a:a0:b8}
	I0408 22:55:00.319371   21772 main.go:141] libmachine: (functional-546336) DBG | domain functional-546336 has defined IP address 192.168.39.234 and MAC address 52:54:00:8a:a0:b8 in network mk-functional-546336
	I0408 22:55:00.319522   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHPort
	I0408 22:55:00.319696   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHKeyPath
	I0408 22:55:00.319776   21772 main.go:141] libmachine: (functional-546336) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:a0:b8", ip: ""} in network mk-functional-546336: {Iface:virbr1 ExpiryTime:2025-04-08 23:53:47 +0000 UTC Type:0 Mac:52:54:00:8a:a0:b8 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:functional-546336 Clientid:01:52:54:00:8a:a0:b8}
	I0408 22:55:00.319817   21772 main.go:141] libmachine: (functional-546336) DBG | domain functional-546336 has defined IP address 192.168.39.234 and MAC address 52:54:00:8a:a0:b8 in network mk-functional-546336
	I0408 22:55:00.319891   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHUsername
	I0408 22:55:00.319984   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHPort
	I0408 22:55:00.320037   21772 sshutil.go:53] new ssh client: &{IP:192.168.39.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20501-9125/.minikube/machines/functional-546336/id_rsa Username:docker}
	I0408 22:55:00.320121   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHKeyPath
	I0408 22:55:00.320259   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHUsername
	I0408 22:55:00.320368   21772 sshutil.go:53] new ssh client: &{IP:192.168.39.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20501-9125/.minikube/machines/functional-546336/id_rsa Username:docker}
	I0408 22:55:00.470604   21772 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0408 22:55:00.470683   21772 command_runner.go:130] > {"iso_version": "v1.35.0", "kicbase_version": "v0.0.45-1736763277-20236", "minikube_version": "v1.35.0", "commit": "3fb24bd87c8c8761e2515e1a9ee13835a389ed68"}
	I0408 22:55:00.470819   21772 ssh_runner.go:195] Run: systemctl --version
	I0408 22:55:00.499552   21772 command_runner.go:130] > systemd 252 (252)
	I0408 22:55:00.499604   21772 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0408 22:55:00.500041   21772 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0408 22:55:00.827340   21772 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0408 22:55:00.834963   21772 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0408 22:55:00.835008   21772 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0408 22:55:00.835072   21772 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0408 22:55:00.877281   21772 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0408 22:55:00.877304   21772 start.go:495] detecting cgroup driver to use...
	I0408 22:55:00.877378   21772 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0408 22:55:00.940318   21772 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0408 22:55:01.008191   21772 docker.go:217] disabling cri-docker service (if available) ...
	I0408 22:55:01.008253   21772 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0408 22:55:01.030120   21772 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0408 22:55:01.062576   21772 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0408 22:55:01.269983   21772 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0408 22:55:01.496425   21772 docker.go:233] disabling docker service ...
	I0408 22:55:01.496502   21772 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0408 22:55:01.519064   21772 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0408 22:55:01.540326   21772 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0408 22:55:01.741595   21772 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0408 22:55:01.913173   21772 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0408 22:55:01.927297   21772 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0408 22:55:01.950625   21772 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0408 22:55:01.951000   21772 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0408 22:55:01.951058   21772 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 22:55:01.962726   21772 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0408 22:55:01.962790   21772 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 22:55:01.974651   21772 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 22:55:01.985351   21772 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 22:55:01.996381   21772 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0408 22:55:02.012061   21772 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 22:55:02.024694   21772 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 22:55:02.036195   21772 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 22:55:02.045483   21772 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0408 22:55:02.053886   21772 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0408 22:55:02.053960   21772 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0408 22:55:02.066815   21772 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 22:55:02.213651   21772 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0408 22:56:32.679193   21772 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.465496752s)
	I0408 22:56:32.679231   21772 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0408 22:56:32.679281   21772 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0408 22:56:32.684914   21772 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0408 22:56:32.684956   21772 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0408 22:56:32.684981   21772 command_runner.go:130] > Device: 0,22	Inode: 1501        Links: 1
	I0408 22:56:32.684990   21772 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0408 22:56:32.684996   21772 command_runner.go:130] > Access: 2025-04-08 22:56:32.503580497 +0000
	I0408 22:56:32.685001   21772 command_runner.go:130] > Modify: 2025-04-08 22:56:32.503580497 +0000
	I0408 22:56:32.685010   21772 command_runner.go:130] > Change: 2025-04-08 22:56:32.503580497 +0000
	I0408 22:56:32.685013   21772 command_runner.go:130] >  Birth: -
	I0408 22:56:32.685205   21772 start.go:563] Will wait 60s for crictl version
	I0408 22:56:32.685262   21772 ssh_runner.go:195] Run: which crictl
	I0408 22:56:32.688828   21772 command_runner.go:130] > /usr/bin/crictl
	I0408 22:56:32.688893   21772 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0408 22:56:32.724970   21772 command_runner.go:130] > Version:  0.1.0
	I0408 22:56:32.724989   21772 command_runner.go:130] > RuntimeName:  cri-o
	I0408 22:56:32.724994   21772 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0408 22:56:32.724998   21772 command_runner.go:130] > RuntimeApiVersion:  v1
	I0408 22:56:32.725893   21772 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0408 22:56:32.725977   21772 ssh_runner.go:195] Run: crio --version
	I0408 22:56:32.752723   21772 command_runner.go:130] > crio version 1.29.1
	I0408 22:56:32.752740   21772 command_runner.go:130] > Version:        1.29.1
	I0408 22:56:32.752746   21772 command_runner.go:130] > GitCommit:      unknown
	I0408 22:56:32.752750   21772 command_runner.go:130] > GitCommitDate:  unknown
	I0408 22:56:32.752754   21772 command_runner.go:130] > GitTreeState:   clean
	I0408 22:56:32.752759   21772 command_runner.go:130] > BuildDate:      2025-01-14T08:57:58Z
	I0408 22:56:32.752763   21772 command_runner.go:130] > GoVersion:      go1.21.6
	I0408 22:56:32.752767   21772 command_runner.go:130] > Compiler:       gc
	I0408 22:56:32.752771   21772 command_runner.go:130] > Platform:       linux/amd64
	I0408 22:56:32.752775   21772 command_runner.go:130] > Linkmode:       dynamic
	I0408 22:56:32.752779   21772 command_runner.go:130] > BuildTags:      
	I0408 22:56:32.752783   21772 command_runner.go:130] >   containers_image_ostree_stub
	I0408 22:56:32.752787   21772 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0408 22:56:32.752791   21772 command_runner.go:130] >   btrfs_noversion
	I0408 22:56:32.752795   21772 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0408 22:56:32.752800   21772 command_runner.go:130] >   libdm_no_deferred_remove
	I0408 22:56:32.752804   21772 command_runner.go:130] >   seccomp
	I0408 22:56:32.752810   21772 command_runner.go:130] > LDFlags:          unknown
	I0408 22:56:32.752814   21772 command_runner.go:130] > SeccompEnabled:   true
	I0408 22:56:32.752818   21772 command_runner.go:130] > AppArmorEnabled:  false
	I0408 22:56:32.753859   21772 ssh_runner.go:195] Run: crio --version
	I0408 22:56:32.778913   21772 command_runner.go:130] > crio version 1.29.1
	I0408 22:56:32.778948   21772 command_runner.go:130] > Version:        1.29.1
	I0408 22:56:32.778957   21772 command_runner.go:130] > GitCommit:      unknown
	I0408 22:56:32.778962   21772 command_runner.go:130] > GitCommitDate:  unknown
	I0408 22:56:32.778967   21772 command_runner.go:130] > GitTreeState:   clean
	I0408 22:56:32.778975   21772 command_runner.go:130] > BuildDate:      2025-01-14T08:57:58Z
	I0408 22:56:32.778980   21772 command_runner.go:130] > GoVersion:      go1.21.6
	I0408 22:56:32.778986   21772 command_runner.go:130] > Compiler:       gc
	I0408 22:56:32.778993   21772 command_runner.go:130] > Platform:       linux/amd64
	I0408 22:56:32.779002   21772 command_runner.go:130] > Linkmode:       dynamic
	I0408 22:56:32.779012   21772 command_runner.go:130] > BuildTags:      
	I0408 22:56:32.779020   21772 command_runner.go:130] >   containers_image_ostree_stub
	I0408 22:56:32.779030   21772 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0408 22:56:32.779037   21772 command_runner.go:130] >   btrfs_noversion
	I0408 22:56:32.779048   21772 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0408 22:56:32.779056   21772 command_runner.go:130] >   libdm_no_deferred_remove
	I0408 22:56:32.779064   21772 command_runner.go:130] >   seccomp
	I0408 22:56:32.779072   21772 command_runner.go:130] > LDFlags:          unknown
	I0408 22:56:32.779080   21772 command_runner.go:130] > SeccompEnabled:   true
	I0408 22:56:32.779090   21772 command_runner.go:130] > AppArmorEnabled:  false
	I0408 22:56:32.780946   21772 out.go:177] * Preparing Kubernetes v1.32.2 on CRI-O 1.29.1 ...
	I0408 22:56:32.782109   21772 main.go:141] libmachine: (functional-546336) Calling .GetIP
	I0408 22:56:32.785040   21772 main.go:141] libmachine: (functional-546336) DBG | domain functional-546336 has defined MAC address 52:54:00:8a:a0:b8 in network mk-functional-546336
	I0408 22:56:32.785454   21772 main.go:141] libmachine: (functional-546336) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:a0:b8", ip: ""} in network mk-functional-546336: {Iface:virbr1 ExpiryTime:2025-04-08 23:53:47 +0000 UTC Type:0 Mac:52:54:00:8a:a0:b8 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:functional-546336 Clientid:01:52:54:00:8a:a0:b8}
	I0408 22:56:32.785486   21772 main.go:141] libmachine: (functional-546336) DBG | domain functional-546336 has defined IP address 192.168.39.234 and MAC address 52:54:00:8a:a0:b8 in network mk-functional-546336
	I0408 22:56:32.785755   21772 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0408 22:56:32.789792   21772 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0408 22:56:32.790053   21772 kubeadm.go:883] updating cluster {Name:functional-546336 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:functional-546336 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.234 Port:8441 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0408 22:56:32.790145   21772 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0408 22:56:32.790182   21772 ssh_runner.go:195] Run: sudo crictl images --output json
	I0408 22:56:32.827503   21772 command_runner.go:130] > {
	I0408 22:56:32.827524   21772 command_runner.go:130] >   "images": [
	I0408 22:56:32.827528   21772 command_runner.go:130] >     {
	I0408 22:56:32.827537   21772 command_runner.go:130] >       "id": "d300845f67aebd4f27f549889087215f476cecdd6d9a715b49a4152857549c56",
	I0408 22:56:32.827541   21772 command_runner.go:130] >       "repoTags": [
	I0408 22:56:32.827547   21772 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20241212-9f82dd49"
	I0408 22:56:32.827550   21772 command_runner.go:130] >       ],
	I0408 22:56:32.827554   21772 command_runner.go:130] >       "repoDigests": [
	I0408 22:56:32.827561   21772 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:56ea59f77258052c4506076525318ffa66817500f68e94a50fdf7d600a280d26",
	I0408 22:56:32.827568   21772 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a3c74735c5fc7cab683a2f94dddec913052aacaa8d8b773c88d428e8dee3dd40"
	I0408 22:56:32.827572   21772 command_runner.go:130] >       ],
	I0408 22:56:32.827576   21772 command_runner.go:130] >       "size": "95714353",
	I0408 22:56:32.827579   21772 command_runner.go:130] >       "uid": null,
	I0408 22:56:32.827583   21772 command_runner.go:130] >       "username": "",
	I0408 22:56:32.827593   21772 command_runner.go:130] >       "spec": null,
	I0408 22:56:32.827600   21772 command_runner.go:130] >       "pinned": false
	I0408 22:56:32.827603   21772 command_runner.go:130] >     },
	I0408 22:56:32.827606   21772 command_runner.go:130] >     {
	I0408 22:56:32.827611   21772 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0408 22:56:32.827614   21772 command_runner.go:130] >       "repoTags": [
	I0408 22:56:32.827620   21772 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0408 22:56:32.827624   21772 command_runner.go:130] >       ],
	I0408 22:56:32.827627   21772 command_runner.go:130] >       "repoDigests": [
	I0408 22:56:32.827635   21772 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0408 22:56:32.827645   21772 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0408 22:56:32.827649   21772 command_runner.go:130] >       ],
	I0408 22:56:32.827657   21772 command_runner.go:130] >       "size": "31470524",
	I0408 22:56:32.827663   21772 command_runner.go:130] >       "uid": null,
	I0408 22:56:32.827667   21772 command_runner.go:130] >       "username": "",
	I0408 22:56:32.827670   21772 command_runner.go:130] >       "spec": null,
	I0408 22:56:32.827674   21772 command_runner.go:130] >       "pinned": false
	I0408 22:56:32.827677   21772 command_runner.go:130] >     },
	I0408 22:56:32.827681   21772 command_runner.go:130] >     {
	I0408 22:56:32.827689   21772 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I0408 22:56:32.827692   21772 command_runner.go:130] >       "repoTags": [
	I0408 22:56:32.827697   21772 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I0408 22:56:32.827703   21772 command_runner.go:130] >       ],
	I0408 22:56:32.827706   21772 command_runner.go:130] >       "repoDigests": [
	I0408 22:56:32.827713   21772 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I0408 22:56:32.827720   21772 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I0408 22:56:32.827724   21772 command_runner.go:130] >       ],
	I0408 22:56:32.827727   21772 command_runner.go:130] >       "size": "63273227",
	I0408 22:56:32.827731   21772 command_runner.go:130] >       "uid": null,
	I0408 22:56:32.827737   21772 command_runner.go:130] >       "username": "nonroot",
	I0408 22:56:32.827740   21772 command_runner.go:130] >       "spec": null,
	I0408 22:56:32.827754   21772 command_runner.go:130] >       "pinned": false
	I0408 22:56:32.827759   21772 command_runner.go:130] >     },
	I0408 22:56:32.827766   21772 command_runner.go:130] >     {
	I0408 22:56:32.827773   21772 command_runner.go:130] >       "id": "a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc",
	I0408 22:56:32.827777   21772 command_runner.go:130] >       "repoTags": [
	I0408 22:56:32.827782   21772 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.16-0"
	I0408 22:56:32.827785   21772 command_runner.go:130] >       ],
	I0408 22:56:32.827791   21772 command_runner.go:130] >       "repoDigests": [
	I0408 22:56:32.827798   21772 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:1d988b04a9476119cdbc2025ba58f6eec19554caf36edb43c357ff412d07e990",
	I0408 22:56:32.827811   21772 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5"
	I0408 22:56:32.827816   21772 command_runner.go:130] >       ],
	I0408 22:56:32.827820   21772 command_runner.go:130] >       "size": "151021823",
	I0408 22:56:32.827824   21772 command_runner.go:130] >       "uid": {
	I0408 22:56:32.827830   21772 command_runner.go:130] >         "value": "0"
	I0408 22:56:32.827833   21772 command_runner.go:130] >       },
	I0408 22:56:32.827837   21772 command_runner.go:130] >       "username": "",
	I0408 22:56:32.827840   21772 command_runner.go:130] >       "spec": null,
	I0408 22:56:32.827844   21772 command_runner.go:130] >       "pinned": false
	I0408 22:56:32.827846   21772 command_runner.go:130] >     },
	I0408 22:56:32.827850   21772 command_runner.go:130] >     {
	I0408 22:56:32.827858   21772 command_runner.go:130] >       "id": "85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef",
	I0408 22:56:32.827874   21772 command_runner.go:130] >       "repoTags": [
	I0408 22:56:32.827882   21772 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.32.2"
	I0408 22:56:32.827890   21772 command_runner.go:130] >       ],
	I0408 22:56:32.827896   21772 command_runner.go:130] >       "repoDigests": [
	I0408 22:56:32.827908   21772 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:48e677803a23233a10a796f3d7edc73223e3fbaceb6113665c1015464a743e9d",
	I0408 22:56:32.827916   21772 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:c47449f3e751588ea0cb74e325e0f83db335a415f4f4c7fb147375dd6c84757f"
	I0408 22:56:32.827922   21772 command_runner.go:130] >       ],
	I0408 22:56:32.827925   21772 command_runner.go:130] >       "size": "98055648",
	I0408 22:56:32.827929   21772 command_runner.go:130] >       "uid": {
	I0408 22:56:32.827932   21772 command_runner.go:130] >         "value": "0"
	I0408 22:56:32.827936   21772 command_runner.go:130] >       },
	I0408 22:56:32.827949   21772 command_runner.go:130] >       "username": "",
	I0408 22:56:32.827954   21772 command_runner.go:130] >       "spec": null,
	I0408 22:56:32.827958   21772 command_runner.go:130] >       "pinned": false
	I0408 22:56:32.827966   21772 command_runner.go:130] >     },
	I0408 22:56:32.827970   21772 command_runner.go:130] >     {
	I0408 22:56:32.827976   21772 command_runner.go:130] >       "id": "b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389",
	I0408 22:56:32.827982   21772 command_runner.go:130] >       "repoTags": [
	I0408 22:56:32.827987   21772 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.32.2"
	I0408 22:56:32.827993   21772 command_runner.go:130] >       ],
	I0408 22:56:32.827996   21772 command_runner.go:130] >       "repoDigests": [
	I0408 22:56:32.828003   21772 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:01669d976f198e210414e4864454330f6cbd4e5fedf1570b0340d206442f2ae5",
	I0408 22:56:32.828013   21772 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:399aa50f4d1361c59dc458e634506d02de32613d03a9a614a21058741162ef90"
	I0408 22:56:32.828019   21772 command_runner.go:130] >       ],
	I0408 22:56:32.828022   21772 command_runner.go:130] >       "size": "90793286",
	I0408 22:56:32.828026   21772 command_runner.go:130] >       "uid": {
	I0408 22:56:32.828029   21772 command_runner.go:130] >         "value": "0"
	I0408 22:56:32.828033   21772 command_runner.go:130] >       },
	I0408 22:56:32.828036   21772 command_runner.go:130] >       "username": "",
	I0408 22:56:32.828043   21772 command_runner.go:130] >       "spec": null,
	I0408 22:56:32.828046   21772 command_runner.go:130] >       "pinned": false
	I0408 22:56:32.828049   21772 command_runner.go:130] >     },
	I0408 22:56:32.828052   21772 command_runner.go:130] >     {
	I0408 22:56:32.828058   21772 command_runner.go:130] >       "id": "f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5",
	I0408 22:56:32.828064   21772 command_runner.go:130] >       "repoTags": [
	I0408 22:56:32.828069   21772 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.32.2"
	I0408 22:56:32.828074   21772 command_runner.go:130] >       ],
	I0408 22:56:32.828078   21772 command_runner.go:130] >       "repoDigests": [
	I0408 22:56:32.828085   21772 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:83c025f0faa6799fab6645102a98138e39a9a7db2be3bc792c79d72659b1805d",
	I0408 22:56:32.828094   21772 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:ab90de2ec2cbade95df799a63d85e438f51817055ecee067b694fdd0f776e15d"
	I0408 22:56:32.828097   21772 command_runner.go:130] >       ],
	I0408 22:56:32.828102   21772 command_runner.go:130] >       "size": "95271321",
	I0408 22:56:32.828108   21772 command_runner.go:130] >       "uid": null,
	I0408 22:56:32.828111   21772 command_runner.go:130] >       "username": "",
	I0408 22:56:32.828115   21772 command_runner.go:130] >       "spec": null,
	I0408 22:56:32.828119   21772 command_runner.go:130] >       "pinned": false
	I0408 22:56:32.828124   21772 command_runner.go:130] >     },
	I0408 22:56:32.828131   21772 command_runner.go:130] >     {
	I0408 22:56:32.828140   21772 command_runner.go:130] >       "id": "d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d",
	I0408 22:56:32.828144   21772 command_runner.go:130] >       "repoTags": [
	I0408 22:56:32.828150   21772 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.32.2"
	I0408 22:56:32.828159   21772 command_runner.go:130] >       ],
	I0408 22:56:32.828165   21772 command_runner.go:130] >       "repoDigests": [
	I0408 22:56:32.828207   21772 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:45710d74cfd5aa10a001d0cf81747b77c28617444ffee0503d12f1dcd7450f76",
	I0408 22:56:32.828220   21772 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:c98f93221ffa10bfb46b85966915759dbcaf957098364763242e814fee84363b"
	I0408 22:56:32.828223   21772 command_runner.go:130] >       ],
	I0408 22:56:32.828227   21772 command_runner.go:130] >       "size": "70653254",
	I0408 22:56:32.828230   21772 command_runner.go:130] >       "uid": {
	I0408 22:56:32.828233   21772 command_runner.go:130] >         "value": "0"
	I0408 22:56:32.828236   21772 command_runner.go:130] >       },
	I0408 22:56:32.828239   21772 command_runner.go:130] >       "username": "",
	I0408 22:56:32.828243   21772 command_runner.go:130] >       "spec": null,
	I0408 22:56:32.828247   21772 command_runner.go:130] >       "pinned": false
	I0408 22:56:32.828250   21772 command_runner.go:130] >     },
	I0408 22:56:32.828253   21772 command_runner.go:130] >     {
	I0408 22:56:32.828259   21772 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0408 22:56:32.828265   21772 command_runner.go:130] >       "repoTags": [
	I0408 22:56:32.828269   21772 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0408 22:56:32.828272   21772 command_runner.go:130] >       ],
	I0408 22:56:32.828276   21772 command_runner.go:130] >       "repoDigests": [
	I0408 22:56:32.828283   21772 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0408 22:56:32.828292   21772 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0408 22:56:32.828295   21772 command_runner.go:130] >       ],
	I0408 22:56:32.828298   21772 command_runner.go:130] >       "size": "742080",
	I0408 22:56:32.828302   21772 command_runner.go:130] >       "uid": {
	I0408 22:56:32.828305   21772 command_runner.go:130] >         "value": "65535"
	I0408 22:56:32.828308   21772 command_runner.go:130] >       },
	I0408 22:56:32.828312   21772 command_runner.go:130] >       "username": "",
	I0408 22:56:32.828318   21772 command_runner.go:130] >       "spec": null,
	I0408 22:56:32.828324   21772 command_runner.go:130] >       "pinned": true
	I0408 22:56:32.828331   21772 command_runner.go:130] >     }
	I0408 22:56:32.828334   21772 command_runner.go:130] >   ]
	I0408 22:56:32.828337   21772 command_runner.go:130] > }
	I0408 22:56:32.829120   21772 crio.go:514] all images are preloaded for cri-o runtime.
	I0408 22:56:32.829135   21772 crio.go:433] Images already preloaded, skipping extraction
	I0408 22:56:32.829174   21772 ssh_runner.go:195] Run: sudo crictl images --output json
	I0408 22:56:32.860598   21772 command_runner.go:130] > {
	I0408 22:56:32.860616   21772 command_runner.go:130] >   "images": [
	I0408 22:56:32.860620   21772 command_runner.go:130] >     {
	I0408 22:56:32.860628   21772 command_runner.go:130] >       "id": "d300845f67aebd4f27f549889087215f476cecdd6d9a715b49a4152857549c56",
	I0408 22:56:32.860632   21772 command_runner.go:130] >       "repoTags": [
	I0408 22:56:32.860637   21772 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20241212-9f82dd49"
	I0408 22:56:32.860641   21772 command_runner.go:130] >       ],
	I0408 22:56:32.860645   21772 command_runner.go:130] >       "repoDigests": [
	I0408 22:56:32.860658   21772 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:56ea59f77258052c4506076525318ffa66817500f68e94a50fdf7d600a280d26",
	I0408 22:56:32.860666   21772 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a3c74735c5fc7cab683a2f94dddec913052aacaa8d8b773c88d428e8dee3dd40"
	I0408 22:56:32.860669   21772 command_runner.go:130] >       ],
	I0408 22:56:32.860674   21772 command_runner.go:130] >       "size": "95714353",
	I0408 22:56:32.860677   21772 command_runner.go:130] >       "uid": null,
	I0408 22:56:32.860682   21772 command_runner.go:130] >       "username": "",
	I0408 22:56:32.860690   21772 command_runner.go:130] >       "spec": null,
	I0408 22:56:32.860694   21772 command_runner.go:130] >       "pinned": false
	I0408 22:56:32.860699   21772 command_runner.go:130] >     },
	I0408 22:56:32.860702   21772 command_runner.go:130] >     {
	I0408 22:56:32.860708   21772 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0408 22:56:32.860712   21772 command_runner.go:130] >       "repoTags": [
	I0408 22:56:32.860719   21772 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0408 22:56:32.860722   21772 command_runner.go:130] >       ],
	I0408 22:56:32.860727   21772 command_runner.go:130] >       "repoDigests": [
	I0408 22:56:32.860734   21772 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0408 22:56:32.860742   21772 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0408 22:56:32.860746   21772 command_runner.go:130] >       ],
	I0408 22:56:32.860752   21772 command_runner.go:130] >       "size": "31470524",
	I0408 22:56:32.860757   21772 command_runner.go:130] >       "uid": null,
	I0408 22:56:32.860761   21772 command_runner.go:130] >       "username": "",
	I0408 22:56:32.860764   21772 command_runner.go:130] >       "spec": null,
	I0408 22:56:32.860768   21772 command_runner.go:130] >       "pinned": false
	I0408 22:56:32.860771   21772 command_runner.go:130] >     },
	I0408 22:56:32.860774   21772 command_runner.go:130] >     {
	I0408 22:56:32.860780   21772 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I0408 22:56:32.860784   21772 command_runner.go:130] >       "repoTags": [
	I0408 22:56:32.860789   21772 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I0408 22:56:32.860793   21772 command_runner.go:130] >       ],
	I0408 22:56:32.860797   21772 command_runner.go:130] >       "repoDigests": [
	I0408 22:56:32.860805   21772 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I0408 22:56:32.860814   21772 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I0408 22:56:32.860818   21772 command_runner.go:130] >       ],
	I0408 22:56:32.860828   21772 command_runner.go:130] >       "size": "63273227",
	I0408 22:56:32.860834   21772 command_runner.go:130] >       "uid": null,
	I0408 22:56:32.860838   21772 command_runner.go:130] >       "username": "nonroot",
	I0408 22:56:32.860842   21772 command_runner.go:130] >       "spec": null,
	I0408 22:56:32.860848   21772 command_runner.go:130] >       "pinned": false
	I0408 22:56:32.860851   21772 command_runner.go:130] >     },
	I0408 22:56:32.860854   21772 command_runner.go:130] >     {
	I0408 22:56:32.860860   21772 command_runner.go:130] >       "id": "a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc",
	I0408 22:56:32.860866   21772 command_runner.go:130] >       "repoTags": [
	I0408 22:56:32.860871   21772 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.16-0"
	I0408 22:56:32.860878   21772 command_runner.go:130] >       ],
	I0408 22:56:32.860882   21772 command_runner.go:130] >       "repoDigests": [
	I0408 22:56:32.860891   21772 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:1d988b04a9476119cdbc2025ba58f6eec19554caf36edb43c357ff412d07e990",
	I0408 22:56:32.860905   21772 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5"
	I0408 22:56:32.860911   21772 command_runner.go:130] >       ],
	I0408 22:56:32.860915   21772 command_runner.go:130] >       "size": "151021823",
	I0408 22:56:32.860921   21772 command_runner.go:130] >       "uid": {
	I0408 22:56:32.860925   21772 command_runner.go:130] >         "value": "0"
	I0408 22:56:32.860931   21772 command_runner.go:130] >       },
	I0408 22:56:32.860946   21772 command_runner.go:130] >       "username": "",
	I0408 22:56:32.860953   21772 command_runner.go:130] >       "spec": null,
	I0408 22:56:32.860957   21772 command_runner.go:130] >       "pinned": false
	I0408 22:56:32.860962   21772 command_runner.go:130] >     },
	I0408 22:56:32.860965   21772 command_runner.go:130] >     {
	I0408 22:56:32.860971   21772 command_runner.go:130] >       "id": "85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef",
	I0408 22:56:32.860977   21772 command_runner.go:130] >       "repoTags": [
	I0408 22:56:32.860982   21772 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.32.2"
	I0408 22:56:32.860985   21772 command_runner.go:130] >       ],
	I0408 22:56:32.860990   21772 command_runner.go:130] >       "repoDigests": [
	I0408 22:56:32.860997   21772 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:48e677803a23233a10a796f3d7edc73223e3fbaceb6113665c1015464a743e9d",
	I0408 22:56:32.861007   21772 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:c47449f3e751588ea0cb74e325e0f83db335a415f4f4c7fb147375dd6c84757f"
	I0408 22:56:32.861010   21772 command_runner.go:130] >       ],
	I0408 22:56:32.861014   21772 command_runner.go:130] >       "size": "98055648",
	I0408 22:56:32.861024   21772 command_runner.go:130] >       "uid": {
	I0408 22:56:32.861030   21772 command_runner.go:130] >         "value": "0"
	I0408 22:56:32.861033   21772 command_runner.go:130] >       },
	I0408 22:56:32.861037   21772 command_runner.go:130] >       "username": "",
	I0408 22:56:32.861043   21772 command_runner.go:130] >       "spec": null,
	I0408 22:56:32.861046   21772 command_runner.go:130] >       "pinned": false
	I0408 22:56:32.861049   21772 command_runner.go:130] >     },
	I0408 22:56:32.861052   21772 command_runner.go:130] >     {
	I0408 22:56:32.861060   21772 command_runner.go:130] >       "id": "b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389",
	I0408 22:56:32.861064   21772 command_runner.go:130] >       "repoTags": [
	I0408 22:56:32.861071   21772 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.32.2"
	I0408 22:56:32.861076   21772 command_runner.go:130] >       ],
	I0408 22:56:32.861082   21772 command_runner.go:130] >       "repoDigests": [
	I0408 22:56:32.861090   21772 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:01669d976f198e210414e4864454330f6cbd4e5fedf1570b0340d206442f2ae5",
	I0408 22:56:32.861099   21772 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:399aa50f4d1361c59dc458e634506d02de32613d03a9a614a21058741162ef90"
	I0408 22:56:32.861103   21772 command_runner.go:130] >       ],
	I0408 22:56:32.861106   21772 command_runner.go:130] >       "size": "90793286",
	I0408 22:56:32.861110   21772 command_runner.go:130] >       "uid": {
	I0408 22:56:32.861114   21772 command_runner.go:130] >         "value": "0"
	I0408 22:56:32.861116   21772 command_runner.go:130] >       },
	I0408 22:56:32.861120   21772 command_runner.go:130] >       "username": "",
	I0408 22:56:32.861126   21772 command_runner.go:130] >       "spec": null,
	I0408 22:56:32.861130   21772 command_runner.go:130] >       "pinned": false
	I0408 22:56:32.861133   21772 command_runner.go:130] >     },
	I0408 22:56:32.861136   21772 command_runner.go:130] >     {
	I0408 22:56:32.861143   21772 command_runner.go:130] >       "id": "f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5",
	I0408 22:56:32.861149   21772 command_runner.go:130] >       "repoTags": [
	I0408 22:56:32.861153   21772 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.32.2"
	I0408 22:56:32.861158   21772 command_runner.go:130] >       ],
	I0408 22:56:32.861162   21772 command_runner.go:130] >       "repoDigests": [
	I0408 22:56:32.861169   21772 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:83c025f0faa6799fab6645102a98138e39a9a7db2be3bc792c79d72659b1805d",
	I0408 22:56:32.861178   21772 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:ab90de2ec2cbade95df799a63d85e438f51817055ecee067b694fdd0f776e15d"
	I0408 22:56:32.861182   21772 command_runner.go:130] >       ],
	I0408 22:56:32.861190   21772 command_runner.go:130] >       "size": "95271321",
	I0408 22:56:32.861196   21772 command_runner.go:130] >       "uid": null,
	I0408 22:56:32.861200   21772 command_runner.go:130] >       "username": "",
	I0408 22:56:32.861204   21772 command_runner.go:130] >       "spec": null,
	I0408 22:56:32.861207   21772 command_runner.go:130] >       "pinned": false
	I0408 22:56:32.861210   21772 command_runner.go:130] >     },
	I0408 22:56:32.861213   21772 command_runner.go:130] >     {
	I0408 22:56:32.861219   21772 command_runner.go:130] >       "id": "d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d",
	I0408 22:56:32.861224   21772 command_runner.go:130] >       "repoTags": [
	I0408 22:56:32.861229   21772 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.32.2"
	I0408 22:56:32.861234   21772 command_runner.go:130] >       ],
	I0408 22:56:32.861238   21772 command_runner.go:130] >       "repoDigests": [
	I0408 22:56:32.861256   21772 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:45710d74cfd5aa10a001d0cf81747b77c28617444ffee0503d12f1dcd7450f76",
	I0408 22:56:32.861266   21772 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:c98f93221ffa10bfb46b85966915759dbcaf957098364763242e814fee84363b"
	I0408 22:56:32.861269   21772 command_runner.go:130] >       ],
	I0408 22:56:32.861273   21772 command_runner.go:130] >       "size": "70653254",
	I0408 22:56:32.861275   21772 command_runner.go:130] >       "uid": {
	I0408 22:56:32.861279   21772 command_runner.go:130] >         "value": "0"
	I0408 22:56:32.861282   21772 command_runner.go:130] >       },
	I0408 22:56:32.861286   21772 command_runner.go:130] >       "username": "",
	I0408 22:56:32.861289   21772 command_runner.go:130] >       "spec": null,
	I0408 22:56:32.861293   21772 command_runner.go:130] >       "pinned": false
	I0408 22:56:32.861296   21772 command_runner.go:130] >     },
	I0408 22:56:32.861299   21772 command_runner.go:130] >     {
	I0408 22:56:32.861305   21772 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0408 22:56:32.861314   21772 command_runner.go:130] >       "repoTags": [
	I0408 22:56:32.861319   21772 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0408 22:56:32.861322   21772 command_runner.go:130] >       ],
	I0408 22:56:32.861325   21772 command_runner.go:130] >       "repoDigests": [
	I0408 22:56:32.861332   21772 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0408 22:56:32.861341   21772 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0408 22:56:32.861345   21772 command_runner.go:130] >       ],
	I0408 22:56:32.861349   21772 command_runner.go:130] >       "size": "742080",
	I0408 22:56:32.861357   21772 command_runner.go:130] >       "uid": {
	I0408 22:56:32.861364   21772 command_runner.go:130] >         "value": "65535"
	I0408 22:56:32.861367   21772 command_runner.go:130] >       },
	I0408 22:56:32.861370   21772 command_runner.go:130] >       "username": "",
	I0408 22:56:32.861374   21772 command_runner.go:130] >       "spec": null,
	I0408 22:56:32.861380   21772 command_runner.go:130] >       "pinned": true
	I0408 22:56:32.861382   21772 command_runner.go:130] >     }
	I0408 22:56:32.861385   21772 command_runner.go:130] >   ]
	I0408 22:56:32.861388   21772 command_runner.go:130] > }
	I0408 22:56:32.862015   21772 crio.go:514] all images are preloaded for cri-o runtime.
	I0408 22:56:32.862029   21772 cache_images.go:84] Images are preloaded, skipping loading
	I0408 22:56:32.862035   21772 kubeadm.go:934] updating node { 192.168.39.234 8441 v1.32.2 crio true true} ...
	I0408 22:56:32.862119   21772 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-546336 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.234
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:functional-546336 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0408 22:56:32.862176   21772 ssh_runner.go:195] Run: crio config
	I0408 22:56:32.900028   21772 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0408 22:56:32.900049   21772 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0408 22:56:32.900055   21772 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0408 22:56:32.900058   21772 command_runner.go:130] > #
	I0408 22:56:32.900065   21772 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0408 22:56:32.900071   21772 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0408 22:56:32.900077   21772 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0408 22:56:32.900097   21772 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0408 22:56:32.900101   21772 command_runner.go:130] > # reload'.
	I0408 22:56:32.900107   21772 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0408 22:56:32.900113   21772 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0408 22:56:32.900120   21772 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0408 22:56:32.900130   21772 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0408 22:56:32.900135   21772 command_runner.go:130] > [crio]
	I0408 22:56:32.900144   21772 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0408 22:56:32.900152   21772 command_runner.go:130] > # containers images, in this directory.
	I0408 22:56:32.900158   21772 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0408 22:56:32.900171   21772 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0408 22:56:32.900182   21772 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0408 22:56:32.900190   21772 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0408 22:56:32.900199   21772 command_runner.go:130] > # imagestore = ""
	I0408 22:56:32.900205   21772 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0408 22:56:32.900213   21772 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0408 22:56:32.900221   21772 command_runner.go:130] > storage_driver = "overlay"
	I0408 22:56:32.900232   21772 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0408 22:56:32.900240   21772 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0408 22:56:32.900247   21772 command_runner.go:130] > storage_option = [
	I0408 22:56:32.900262   21772 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0408 22:56:32.900275   21772 command_runner.go:130] > ]
	I0408 22:56:32.900286   21772 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0408 22:56:32.900296   21772 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0408 22:56:32.900301   21772 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0408 22:56:32.900307   21772 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0408 22:56:32.900312   21772 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0408 22:56:32.900316   21772 command_runner.go:130] > # always happen on a node reboot
	I0408 22:56:32.900323   21772 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0408 22:56:32.900351   21772 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0408 22:56:32.900362   21772 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0408 22:56:32.900370   21772 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0408 22:56:32.900379   21772 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0408 22:56:32.900389   21772 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0408 22:56:32.900401   21772 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0408 22:56:32.900408   21772 command_runner.go:130] > # internal_wipe = true
	I0408 22:56:32.900421   21772 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0408 22:56:32.900433   21772 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0408 22:56:32.900445   21772 command_runner.go:130] > # internal_repair = false
	I0408 22:56:32.900456   21772 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0408 22:56:32.900465   21772 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0408 22:56:32.900477   21772 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0408 22:56:32.900488   21772 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0408 22:56:32.900500   21772 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0408 22:56:32.900506   21772 command_runner.go:130] > [crio.api]
	I0408 22:56:32.900514   21772 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0408 22:56:32.900524   21772 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0408 22:56:32.900532   21772 command_runner.go:130] > # IP address on which the stream server will listen.
	I0408 22:56:32.900539   21772 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0408 22:56:32.900549   21772 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0408 22:56:32.900559   21772 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0408 22:56:32.900565   21772 command_runner.go:130] > # stream_port = "0"
	I0408 22:56:32.900572   21772 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0408 22:56:32.900581   21772 command_runner.go:130] > # stream_enable_tls = false
	I0408 22:56:32.900589   21772 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0408 22:56:32.900593   21772 command_runner.go:130] > # stream_idle_timeout = ""
	I0408 22:56:32.900601   21772 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0408 22:56:32.900607   21772 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0408 22:56:32.900614   21772 command_runner.go:130] > # minutes.
	I0408 22:56:32.900620   21772 command_runner.go:130] > # stream_tls_cert = ""
	I0408 22:56:32.900631   21772 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0408 22:56:32.900649   21772 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0408 22:56:32.900658   21772 command_runner.go:130] > # stream_tls_key = ""
	I0408 22:56:32.900667   21772 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0408 22:56:32.900679   21772 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0408 22:56:32.900709   21772 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0408 22:56:32.900720   21772 command_runner.go:130] > # stream_tls_ca = ""
	I0408 22:56:32.900732   21772 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0408 22:56:32.900742   21772 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0408 22:56:32.900753   21772 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0408 22:56:32.900763   21772 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0408 22:56:32.900773   21772 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0408 22:56:32.900785   21772 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0408 22:56:32.900793   21772 command_runner.go:130] > [crio.runtime]
	I0408 22:56:32.900803   21772 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0408 22:56:32.900815   21772 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0408 22:56:32.900822   21772 command_runner.go:130] > # "nofile=1024:2048"
	I0408 22:56:32.900832   21772 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0408 22:56:32.900841   21772 command_runner.go:130] > # default_ulimits = [
	I0408 22:56:32.900847   21772 command_runner.go:130] > # ]
	I0408 22:56:32.900860   21772 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0408 22:56:32.900873   21772 command_runner.go:130] > # no_pivot = false
	I0408 22:56:32.900885   21772 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0408 22:56:32.900897   21772 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0408 22:56:32.900907   21772 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0408 22:56:32.900918   21772 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0408 22:56:32.900932   21772 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0408 22:56:32.900959   21772 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0408 22:56:32.900970   21772 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0408 22:56:32.900976   21772 command_runner.go:130] > # Cgroup setting for conmon
	I0408 22:56:32.900987   21772 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0408 22:56:32.900996   21772 command_runner.go:130] > conmon_cgroup = "pod"
	I0408 22:56:32.901006   21772 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0408 22:56:32.901017   21772 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0408 22:56:32.901030   21772 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0408 22:56:32.901038   21772 command_runner.go:130] > conmon_env = [
	I0408 22:56:32.901047   21772 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0408 22:56:32.901055   21772 command_runner.go:130] > ]
	I0408 22:56:32.901064   21772 command_runner.go:130] > # Additional environment variables to set for all the
	I0408 22:56:32.901075   21772 command_runner.go:130] > # containers. These are overridden if set in the
	I0408 22:56:32.901087   21772 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0408 22:56:32.901094   21772 command_runner.go:130] > # default_env = [
	I0408 22:56:32.901103   21772 command_runner.go:130] > # ]
	I0408 22:56:32.901111   21772 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0408 22:56:32.901125   21772 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I0408 22:56:32.901134   21772 command_runner.go:130] > # selinux = false
	I0408 22:56:32.901143   21772 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0408 22:56:32.901155   21772 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0408 22:56:32.901167   21772 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0408 22:56:32.901177   21772 command_runner.go:130] > # seccomp_profile = ""
	I0408 22:56:32.901186   21772 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0408 22:56:32.901197   21772 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0408 22:56:32.901207   21772 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0408 22:56:32.901217   21772 command_runner.go:130] > # which might increase security.
	I0408 22:56:32.901225   21772 command_runner.go:130] > # This option is currently deprecated,
	I0408 22:56:32.901237   21772 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0408 22:56:32.901255   21772 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0408 22:56:32.901268   21772 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0408 22:56:32.901288   21772 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0408 22:56:32.901314   21772 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0408 22:56:32.901327   21772 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0408 22:56:32.901335   21772 command_runner.go:130] > # This option supports live configuration reload.
	I0408 22:56:32.901345   21772 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0408 22:56:32.901353   21772 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0408 22:56:32.901362   21772 command_runner.go:130] > # the cgroup blockio controller.
	I0408 22:56:32.901369   21772 command_runner.go:130] > # blockio_config_file = ""
	I0408 22:56:32.901382   21772 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0408 22:56:32.901388   21772 command_runner.go:130] > # blockio parameters.
	I0408 22:56:32.901397   21772 command_runner.go:130] > # blockio_reload = false
	I0408 22:56:32.901407   21772 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0408 22:56:32.901414   21772 command_runner.go:130] > # irqbalance daemon.
	I0408 22:56:32.901419   21772 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0408 22:56:32.901425   21772 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0408 22:56:32.901431   21772 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0408 22:56:32.901438   21772 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0408 22:56:32.901446   21772 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0408 22:56:32.901454   21772 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0408 22:56:32.901461   21772 command_runner.go:130] > # This option supports live configuration reload.
	I0408 22:56:32.901468   21772 command_runner.go:130] > # rdt_config_file = ""
	I0408 22:56:32.901476   21772 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0408 22:56:32.901483   21772 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0408 22:56:32.901522   21772 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0408 22:56:32.901531   21772 command_runner.go:130] > # separate_pull_cgroup = ""
	I0408 22:56:32.901538   21772 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0408 22:56:32.901549   21772 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0408 22:56:32.901555   21772 command_runner.go:130] > # will be added.
	I0408 22:56:32.901562   21772 command_runner.go:130] > # default_capabilities = [
	I0408 22:56:32.901571   21772 command_runner.go:130] > # 	"CHOWN",
	I0408 22:56:32.901577   21772 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0408 22:56:32.901585   21772 command_runner.go:130] > # 	"FSETID",
	I0408 22:56:32.901590   21772 command_runner.go:130] > # 	"FOWNER",
	I0408 22:56:32.901596   21772 command_runner.go:130] > # 	"SETGID",
	I0408 22:56:32.901609   21772 command_runner.go:130] > # 	"SETUID",
	I0408 22:56:32.901618   21772 command_runner.go:130] > # 	"SETPCAP",
	I0408 22:56:32.901622   21772 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0408 22:56:32.901628   21772 command_runner.go:130] > # 	"KILL",
	I0408 22:56:32.901632   21772 command_runner.go:130] > # ]
	I0408 22:56:32.901643   21772 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0408 22:56:32.901657   21772 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0408 22:56:32.901671   21772 command_runner.go:130] > # add_inheritable_capabilities = false
	I0408 22:56:32.901681   21772 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0408 22:56:32.901693   21772 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0408 22:56:32.901702   21772 command_runner.go:130] > default_sysctls = [
	I0408 22:56:32.901710   21772 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0408 22:56:32.901718   21772 command_runner.go:130] > ]
	I0408 22:56:32.901725   21772 command_runner.go:130] > # List of devices on the host that a
	I0408 22:56:32.901738   21772 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0408 22:56:32.901744   21772 command_runner.go:130] > # allowed_devices = [
	I0408 22:56:32.901753   21772 command_runner.go:130] > # 	"/dev/fuse",
	I0408 22:56:32.901759   21772 command_runner.go:130] > # ]
	I0408 22:56:32.901768   21772 command_runner.go:130] > # List of additional devices, specified as
	I0408 22:56:32.901782   21772 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0408 22:56:32.901793   21772 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0408 22:56:32.901802   21772 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0408 22:56:32.901811   21772 command_runner.go:130] > # additional_devices = [
	I0408 22:56:32.901816   21772 command_runner.go:130] > # ]
	I0408 22:56:32.901827   21772 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0408 22:56:32.901834   21772 command_runner.go:130] > # cdi_spec_dirs = [
	I0408 22:56:32.901842   21772 command_runner.go:130] > # 	"/etc/cdi",
	I0408 22:56:32.901848   21772 command_runner.go:130] > # 	"/var/run/cdi",
	I0408 22:56:32.901856   21772 command_runner.go:130] > # ]
	I0408 22:56:32.901866   21772 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0408 22:56:32.901878   21772 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0408 22:56:32.901885   21772 command_runner.go:130] > # Defaults to false.
	I0408 22:56:32.901891   21772 command_runner.go:130] > # device_ownership_from_security_context = false
	I0408 22:56:32.901909   21772 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0408 22:56:32.901922   21772 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0408 22:56:32.901928   21772 command_runner.go:130] > # hooks_dir = [
	I0408 22:56:32.901936   21772 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0408 22:56:32.901950   21772 command_runner.go:130] > # ]
	I0408 22:56:32.901959   21772 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0408 22:56:32.901970   21772 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0408 22:56:32.901979   21772 command_runner.go:130] > # its default mounts from the following two files:
	I0408 22:56:32.901990   21772 command_runner.go:130] > #
	I0408 22:56:32.902004   21772 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0408 22:56:32.902015   21772 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0408 22:56:32.902024   21772 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0408 22:56:32.902033   21772 command_runner.go:130] > #
	I0408 22:56:32.902042   21772 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0408 22:56:32.902054   21772 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0408 22:56:32.902067   21772 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0408 22:56:32.902078   21772 command_runner.go:130] > #      only add mounts it finds in this file.
	I0408 22:56:32.902083   21772 command_runner.go:130] > #
	I0408 22:56:32.902092   21772 command_runner.go:130] > # default_mounts_file = ""
	I0408 22:56:32.902103   21772 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0408 22:56:32.902115   21772 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0408 22:56:32.902125   21772 command_runner.go:130] > pids_limit = 1024
	I0408 22:56:32.902135   21772 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I0408 22:56:32.902144   21772 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0408 22:56:32.902151   21772 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0408 22:56:32.902166   21772 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0408 22:56:32.902177   21772 command_runner.go:130] > # log_size_max = -1
	I0408 22:56:32.902187   21772 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0408 22:56:32.902194   21772 command_runner.go:130] > # log_to_journald = false
	I0408 22:56:32.902206   21772 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0408 22:56:32.902216   21772 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0408 22:56:32.902224   21772 command_runner.go:130] > # Path to directory for container attach sockets.
	I0408 22:56:32.902234   21772 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0408 22:56:32.902254   21772 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0408 22:56:32.902264   21772 command_runner.go:130] > # bind_mount_prefix = ""
	I0408 22:56:32.902272   21772 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0408 22:56:32.902281   21772 command_runner.go:130] > # read_only = false
	I0408 22:56:32.902290   21772 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0408 22:56:32.902303   21772 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0408 22:56:32.902311   21772 command_runner.go:130] > # live configuration reload.
	I0408 22:56:32.902315   21772 command_runner.go:130] > # log_level = "info"
	I0408 22:56:32.902325   21772 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0408 22:56:32.902334   21772 command_runner.go:130] > # This option supports live configuration reload.
	I0408 22:56:32.902343   21772 command_runner.go:130] > # log_filter = ""
	I0408 22:56:32.902352   21772 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0408 22:56:32.902366   21772 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0408 22:56:32.902373   21772 command_runner.go:130] > # separated by comma.
	I0408 22:56:32.902387   21772 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0408 22:56:32.902396   21772 command_runner.go:130] > # uid_mappings = ""
	I0408 22:56:32.902405   21772 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0408 22:56:32.902417   21772 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0408 22:56:32.902427   21772 command_runner.go:130] > # separated by comma.
	I0408 22:56:32.902442   21772 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0408 22:56:32.902450   21772 command_runner.go:130] > # gid_mappings = ""
	I0408 22:56:32.902459   21772 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0408 22:56:32.902472   21772 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0408 22:56:32.902481   21772 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0408 22:56:32.902489   21772 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0408 22:56:32.902499   21772 command_runner.go:130] > # minimum_mappable_uid = -1
	I0408 22:56:32.902508   21772 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0408 22:56:32.902521   21772 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0408 22:56:32.902533   21772 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0408 22:56:32.902545   21772 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0408 22:56:32.902554   21772 command_runner.go:130] > # minimum_mappable_gid = -1
	I0408 22:56:32.902563   21772 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0408 22:56:32.902571   21772 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0408 22:56:32.902584   21772 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0408 22:56:32.902595   21772 command_runner.go:130] > # ctr_stop_timeout = 30
	I0408 22:56:32.902608   21772 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0408 22:56:32.902619   21772 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0408 22:56:32.902629   21772 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0408 22:56:32.902637   21772 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0408 22:56:32.902646   21772 command_runner.go:130] > drop_infra_ctr = false
	I0408 22:56:32.902653   21772 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0408 22:56:32.902661   21772 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0408 22:56:32.902672   21772 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0408 22:56:32.902683   21772 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0408 22:56:32.902696   21772 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0408 22:56:32.902708   21772 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0408 22:56:32.902719   21772 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0408 22:56:32.902730   21772 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0408 22:56:32.902735   21772 command_runner.go:130] > # shared_cpuset = ""
	I0408 22:56:32.902740   21772 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0408 22:56:32.902747   21772 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0408 22:56:32.902753   21772 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0408 22:56:32.902767   21772 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0408 22:56:32.902777   21772 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0408 22:56:32.902789   21772 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0408 22:56:32.902801   21772 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0408 22:56:32.902811   21772 command_runner.go:130] > # enable_criu_support = false
	I0408 22:56:32.902820   21772 command_runner.go:130] > # Enable/disable the generation of the container,
	I0408 22:56:32.902826   21772 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0408 22:56:32.902834   21772 command_runner.go:130] > # enable_pod_events = false
	I0408 22:56:32.902844   21772 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0408 22:56:32.902857   21772 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0408 22:56:32.902867   21772 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0408 22:56:32.902873   21772 command_runner.go:130] > # default_runtime = "runc"
	I0408 22:56:32.902884   21772 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0408 22:56:32.902897   21772 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0408 22:56:32.902917   21772 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0408 22:56:32.902928   21772 command_runner.go:130] > # creation as a file is not desired either.
	I0408 22:56:32.902945   21772 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0408 22:56:32.902956   21772 command_runner.go:130] > # the hostname is being managed dynamically.
	I0408 22:56:32.902962   21772 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0408 22:56:32.902970   21772 command_runner.go:130] > # ]
	I0408 22:56:32.902983   21772 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0408 22:56:32.902993   21772 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0408 22:56:32.903002   21772 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0408 22:56:32.903013   21772 command_runner.go:130] > # Each entry in the table should follow the format:
	I0408 22:56:32.903022   21772 command_runner.go:130] > #
	I0408 22:56:32.903029   21772 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0408 22:56:32.903039   21772 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0408 22:56:32.903114   21772 command_runner.go:130] > # runtime_type = "oci"
	I0408 22:56:32.903129   21772 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0408 22:56:32.903136   21772 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0408 22:56:32.903142   21772 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0408 22:56:32.903150   21772 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0408 22:56:32.903156   21772 command_runner.go:130] > # monitor_env = []
	I0408 22:56:32.903164   21772 command_runner.go:130] > # privileged_without_host_devices = false
	I0408 22:56:32.903171   21772 command_runner.go:130] > # allowed_annotations = []
	I0408 22:56:32.903177   21772 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0408 22:56:32.903186   21772 command_runner.go:130] > # Where:
	I0408 22:56:32.903195   21772 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0408 22:56:32.903207   21772 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0408 22:56:32.903220   21772 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0408 22:56:32.903235   21772 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0408 22:56:32.903243   21772 command_runner.go:130] > #   in $PATH.
	I0408 22:56:32.903253   21772 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0408 22:56:32.903260   21772 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0408 22:56:32.903267   21772 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0408 22:56:32.903275   21772 command_runner.go:130] > #   state.
	I0408 22:56:32.903291   21772 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0408 22:56:32.903308   21772 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0408 22:56:32.903321   21772 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0408 22:56:32.903329   21772 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0408 22:56:32.903340   21772 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0408 22:56:32.903348   21772 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0408 22:56:32.903355   21772 command_runner.go:130] > #   The currently recognized values are:
	I0408 22:56:32.903368   21772 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0408 22:56:32.903382   21772 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0408 22:56:32.903394   21772 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0408 22:56:32.903404   21772 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0408 22:56:32.903418   21772 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0408 22:56:32.903429   21772 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0408 22:56:32.903443   21772 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0408 22:56:32.903456   21772 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0408 22:56:32.903467   21772 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0408 22:56:32.903479   21772 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0408 22:56:32.903489   21772 command_runner.go:130] > #   deprecated option "conmon".
	I0408 22:56:32.903501   21772 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0408 22:56:32.903513   21772 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0408 22:56:32.903527   21772 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0408 22:56:32.903538   21772 command_runner.go:130] > #   should be moved to the container's cgroup
	I0408 22:56:32.903548   21772 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0408 22:56:32.903557   21772 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0408 22:56:32.903568   21772 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0408 22:56:32.903577   21772 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0408 22:56:32.903580   21772 command_runner.go:130] > #
	I0408 22:56:32.903588   21772 command_runner.go:130] > # Using the seccomp notifier feature:
	I0408 22:56:32.903595   21772 command_runner.go:130] > #
	I0408 22:56:32.903604   21772 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0408 22:56:32.903618   21772 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0408 22:56:32.903622   21772 command_runner.go:130] > #
	I0408 22:56:32.903632   21772 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0408 22:56:32.903644   21772 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0408 22:56:32.903657   21772 command_runner.go:130] > #
	I0408 22:56:32.903669   21772 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0408 22:56:32.903678   21772 command_runner.go:130] > # feature.
	I0408 22:56:32.903682   21772 command_runner.go:130] > #
	I0408 22:56:32.903694   21772 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0408 22:56:32.903706   21772 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0408 22:56:32.903718   21772 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0408 22:56:32.903728   21772 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0408 22:56:32.903739   21772 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0408 22:56:32.903747   21772 command_runner.go:130] > #
	I0408 22:56:32.903756   21772 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0408 22:56:32.903766   21772 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0408 22:56:32.903769   21772 command_runner.go:130] > #
	I0408 22:56:32.903777   21772 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0408 22:56:32.903789   21772 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0408 22:56:32.903797   21772 command_runner.go:130] > #
	I0408 22:56:32.903805   21772 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0408 22:56:32.903816   21772 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0408 22:56:32.903828   21772 command_runner.go:130] > # limitation.
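A hedged sketch of the pod side of this feature, assuming a runtime handler whose allowed_annotations already contains the key above (the pod and container names are illustrative, not from this test run):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: seccomp-notifier-demo
  annotations:
    io.kubernetes.cri-o.seccompNotifierAction: "stop"
spec:
  restartPolicy: Never              # required, as noted above, so the kubelet does not restart the container
  containers:
  - name: demo
    image: registry.k8s.io/pause:3.10
    securityContext:
      seccompProfile:
        type: RuntimeDefault        # some seccomp profile must apply for the notifier to have anything to observe
EOF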
	I0408 22:56:32.903839   21772 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0408 22:56:32.903846   21772 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0408 22:56:32.903850   21772 command_runner.go:130] > runtime_type = "oci"
	I0408 22:56:32.903854   21772 command_runner.go:130] > runtime_root = "/run/runc"
	I0408 22:56:32.903860   21772 command_runner.go:130] > runtime_config_path = ""
	I0408 22:56:32.903881   21772 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0408 22:56:32.903890   21772 command_runner.go:130] > monitor_cgroup = "pod"
	I0408 22:56:32.903896   21772 command_runner.go:130] > monitor_exec_cgroup = ""
	I0408 22:56:32.903905   21772 command_runner.go:130] > monitor_env = [
	I0408 22:56:32.903914   21772 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0408 22:56:32.903922   21772 command_runner.go:130] > ]
	I0408 22:56:32.903929   21772 command_runner.go:130] > privileged_without_host_devices = false
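For contrast with the runc entry just above, a second OCI handler can be registered with the same fields. This is a sketch only; the handler name, binary path and drop-in file name are assumptions, not part of this test run:

sudo tee /etc/crio/crio.conf.d/20-crun.conf <<'EOF'
[crio.runtime.runtimes.crun]
runtime_path = "/usr/bin/crun"    # assumption: crun installed here; omit to resolve the handler name via $PATH
runtime_type = "oci"
runtime_root = "/run/crun"
monitor_path = "/usr/libexec/crio/conmon"
monitor_cgroup = "pod"
EOF
sudo systemctl restart crio

A Kubernetes RuntimeClass whose handler field is "crun" would then steer pods to this entry through the CRI runtime handler mechanism described above.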
	I0408 22:56:32.903943   21772 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0408 22:56:32.903954   21772 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0408 22:56:32.903974   21772 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0408 22:56:32.903992   21772 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0408 22:56:32.904007   21772 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0408 22:56:32.904018   21772 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0408 22:56:32.904031   21772 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0408 22:56:32.904046   21772 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0408 22:56:32.904059   21772 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0408 22:56:32.904070   21772 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0408 22:56:32.904078   21772 command_runner.go:130] > # Example:
	I0408 22:56:32.904085   21772 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0408 22:56:32.904096   21772 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0408 22:56:32.904104   21772 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0408 22:56:32.904109   21772 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0408 22:56:32.904116   21772 command_runner.go:130] > # cpuset = 0
	I0408 22:56:32.904122   21772 command_runner.go:130] > # cpushares = "0-1"
	I0408 22:56:32.904131   21772 command_runner.go:130] > # Where:
	I0408 22:56:32.904138   21772 command_runner.go:130] > # The workload name is workload-type.
	I0408 22:56:32.904151   21772 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0408 22:56:32.904162   21772 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0408 22:56:32.904171   21772 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0408 22:56:32.904185   21772 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0408 22:56:32.904195   21772 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
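A sketch of the pod side for the example above (the container name "app" and the override value are illustrative; the annotation forms follow the two comment lines directly above):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: workload-demo
  annotations:
    io.crio/workload: ""                               # activation annotation: key only, value ignored
    io.crio.workload-type/app: '{"cpushares": "512"}'  # per-container override, per the example above
spec:
  containers:
  - name: app
    image: registry.k8s.io/pause:3.10
EOF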
	I0408 22:56:32.904202   21772 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0408 22:56:32.904216   21772 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0408 22:56:32.904226   21772 command_runner.go:130] > # Default value is set to true
	I0408 22:56:32.904232   21772 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0408 22:56:32.904244   21772 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0408 22:56:32.904253   21772 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0408 22:56:32.904260   21772 command_runner.go:130] > # Default value is set to 'false'
	I0408 22:56:32.904267   21772 command_runner.go:130] > # disable_hostport_mapping = false
	I0408 22:56:32.904275   21772 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0408 22:56:32.904280   21772 command_runner.go:130] > #
	I0408 22:56:32.904288   21772 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0408 22:56:32.904307   21772 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0408 22:56:32.904322   21772 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0408 22:56:32.904335   21772 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0408 22:56:32.904349   21772 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0408 22:56:32.904357   21772 command_runner.go:130] > [crio.image]
	I0408 22:56:32.904363   21772 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0408 22:56:32.904371   21772 command_runner.go:130] > # default_transport = "docker://"
	I0408 22:56:32.904382   21772 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0408 22:56:32.904394   21772 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0408 22:56:32.904404   21772 command_runner.go:130] > # global_auth_file = ""
	I0408 22:56:32.904411   21772 command_runner.go:130] > # The image used to instantiate infra containers.
	I0408 22:56:32.904421   21772 command_runner.go:130] > # This option supports live configuration reload.
	I0408 22:56:32.904431   21772 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.10"
	I0408 22:56:32.904441   21772 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0408 22:56:32.904449   21772 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0408 22:56:32.904454   21772 command_runner.go:130] > # This option supports live configuration reload.
	I0408 22:56:32.904459   21772 command_runner.go:130] > # pause_image_auth_file = ""
	I0408 22:56:32.904464   21772 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0408 22:56:32.904472   21772 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0408 22:56:32.904481   21772 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0408 22:56:32.904494   21772 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0408 22:56:32.904503   21772 command_runner.go:130] > # pause_command = "/pause"
	I0408 22:56:32.904511   21772 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0408 22:56:32.904551   21772 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0408 22:56:32.904556   21772 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0408 22:56:32.904564   21772 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0408 22:56:32.904569   21772 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0408 22:56:32.904578   21772 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0408 22:56:32.904584   21772 command_runner.go:130] > # pinned_images = [
	I0408 22:56:32.904592   21772 command_runner.go:130] > # ]
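A short sketch of the three match styles described above (the image names are placeholders; the exact entry mirrors the pause_image setting earlier in this file):

sudo tee /etc/crio/crio.conf.d/40-pinned-images.conf <<'EOF'
[crio.image]
pinned_images = [
  "registry.k8s.io/pause:3.10",   # exact match
  "registry.k8s.io/kube-*",       # glob: wildcard at the end
  "*coredns*",                    # keyword: wildcards on both ends
]
EOF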
	I0408 22:56:32.904600   21772 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0408 22:56:32.904607   21772 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0408 22:56:32.904615   21772 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0408 22:56:32.904629   21772 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0408 22:56:32.904642   21772 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0408 22:56:32.904651   21772 command_runner.go:130] > # signature_policy = ""
	I0408 22:56:32.904660   21772 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0408 22:56:32.904672   21772 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0408 22:56:32.904681   21772 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0408 22:56:32.904694   21772 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0408 22:56:32.904702   21772 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0408 22:56:32.904707   21772 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
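A hedged example of the per-namespace lookup described above, for the "default" namespace. The permissive policy is for illustration only; /etc/crio/policies is the documented default directory:

sudo mkdir -p /etc/crio/policies
# <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json -> /etc/crio/policies/default.json
sudo tee /etc/crio/policies/default.json <<'EOF'
{ "default": [ { "type": "insecureAcceptAnything" } ] }
EOF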
	I0408 22:56:32.904714   21772 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0408 22:56:32.904720   21772 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0408 22:56:32.904723   21772 command_runner.go:130] > # changing them here.
	I0408 22:56:32.904726   21772 command_runner.go:130] > # insecure_registries = [
	I0408 22:56:32.904729   21772 command_runner.go:130] > # ]
	I0408 22:56:32.904735   21772 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0408 22:56:32.904739   21772 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0408 22:56:32.904743   21772 command_runner.go:130] > # image_volumes = "mkdir"
	I0408 22:56:32.904747   21772 command_runner.go:130] > # Temporary directory to use for storing big files
	I0408 22:56:32.904751   21772 command_runner.go:130] > # big_files_temporary_dir = ""
	I0408 22:56:32.904756   21772 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0408 22:56:32.904760   21772 command_runner.go:130] > # CNI plugins.
	I0408 22:56:32.904763   21772 command_runner.go:130] > [crio.network]
	I0408 22:56:32.904768   21772 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0408 22:56:32.904773   21772 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0408 22:56:32.904777   21772 command_runner.go:130] > # cni_default_network = ""
	I0408 22:56:32.904782   21772 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0408 22:56:32.904786   21772 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0408 22:56:32.904791   21772 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0408 22:56:32.904794   21772 command_runner.go:130] > # plugin_dirs = [
	I0408 22:56:32.904798   21772 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0408 22:56:32.904800   21772 command_runner.go:130] > # ]
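A sketch pinning CRI-O to one named CNI network. The network name is an assumption and must match a configuration under network_dir; minikube itself settles on the bridge CNI later in this log:

sudo tee /etc/crio/crio.conf.d/50-network.conf <<'EOF'
[crio.network]
cni_default_network = "bridge"
network_dir = "/etc/cni/net.d/"
plugin_dirs = [ "/opt/cni/bin/" ]
EOF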
	I0408 22:56:32.904805   21772 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0408 22:56:32.904809   21772 command_runner.go:130] > [crio.metrics]
	I0408 22:56:32.904818   21772 command_runner.go:130] > # Globally enable or disable metrics support.
	I0408 22:56:32.904821   21772 command_runner.go:130] > enable_metrics = true
	I0408 22:56:32.904825   21772 command_runner.go:130] > # Specify enabled metrics collectors.
	I0408 22:56:32.904829   21772 command_runner.go:130] > # Per default all metrics are enabled.
	I0408 22:56:32.904834   21772 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0408 22:56:32.904840   21772 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0408 22:56:32.904847   21772 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0408 22:56:32.904853   21772 command_runner.go:130] > # metrics_collectors = [
	I0408 22:56:32.904859   21772 command_runner.go:130] > # 	"operations",
	I0408 22:56:32.904866   21772 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0408 22:56:32.904871   21772 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0408 22:56:32.904875   21772 command_runner.go:130] > # 	"operations_errors",
	I0408 22:56:32.904879   21772 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0408 22:56:32.904882   21772 command_runner.go:130] > # 	"image_pulls_by_name",
	I0408 22:56:32.904888   21772 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0408 22:56:32.904892   21772 command_runner.go:130] > # 	"image_pulls_failures",
	I0408 22:56:32.904895   21772 command_runner.go:130] > # 	"image_pulls_successes",
	I0408 22:56:32.904899   21772 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0408 22:56:32.904903   21772 command_runner.go:130] > # 	"image_layer_reuse",
	I0408 22:56:32.904907   21772 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0408 22:56:32.904911   21772 command_runner.go:130] > # 	"containers_oom_total",
	I0408 22:56:32.904915   21772 command_runner.go:130] > # 	"containers_oom",
	I0408 22:56:32.904918   21772 command_runner.go:130] > # 	"processes_defunct",
	I0408 22:56:32.904922   21772 command_runner.go:130] > # 	"operations_total",
	I0408 22:56:32.904929   21772 command_runner.go:130] > # 	"operations_latency_seconds",
	I0408 22:56:32.904933   21772 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0408 22:56:32.904937   21772 command_runner.go:130] > # 	"operations_errors_total",
	I0408 22:56:32.904947   21772 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0408 22:56:32.904955   21772 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0408 22:56:32.904959   21772 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0408 22:56:32.904963   21772 command_runner.go:130] > # 	"image_pulls_success_total",
	I0408 22:56:32.904967   21772 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0408 22:56:32.904971   21772 command_runner.go:130] > # 	"containers_oom_count_total",
	I0408 22:56:32.904981   21772 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0408 22:56:32.904988   21772 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0408 22:56:32.904991   21772 command_runner.go:130] > # ]
	I0408 22:56:32.905000   21772 command_runner.go:130] > # The port on which the metrics server will listen.
	I0408 22:56:32.905006   21772 command_runner.go:130] > # metrics_port = 9090
	I0408 22:56:32.905011   21772 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0408 22:56:32.905014   21772 command_runner.go:130] > # metrics_socket = ""
	I0408 22:56:32.905019   21772 command_runner.go:130] > # The certificate for the secure metrics server.
	I0408 22:56:32.905024   21772 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0408 22:56:32.905033   21772 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0408 22:56:32.905037   21772 command_runner.go:130] > # certificate on any modification event.
	I0408 22:56:32.905043   21772 command_runner.go:130] > # metrics_cert = ""
	I0408 22:56:32.905048   21772 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0408 22:56:32.905052   21772 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0408 22:56:32.905058   21772 command_runner.go:130] > # metrics_key = ""
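A small sketch tying the settings above together. The drop-in path and collector selection are assumptions; port 9090 is the documented default:

sudo tee /etc/crio/crio.conf.d/60-metrics.conf <<'EOF'
[crio.metrics]
enable_metrics = true
metrics_collectors = [ "operations", "image_pulls_failure_total" ]
EOF
sudo systemctl restart crio
# metric names carry the crio_ / container_runtime_ prefixes described above
curl -s http://127.0.0.1:9090/metrics | grep '^crio_'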
	I0408 22:56:32.905064   21772 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0408 22:56:32.905070   21772 command_runner.go:130] > [crio.tracing]
	I0408 22:56:32.905075   21772 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0408 22:56:32.905079   21772 command_runner.go:130] > # enable_tracing = false
	I0408 22:56:32.905087   21772 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0408 22:56:32.905091   21772 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0408 22:56:32.905097   21772 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0408 22:56:32.905104   21772 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
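A sketch of turning the exporter on. The collector address is an assumption; the sampling value follows the comment above:

sudo tee /etc/crio/crio.conf.d/70-tracing.conf <<'EOF'
[crio.tracing]
enable_tracing = true
tracing_endpoint = "127.0.0.1:4317"
tracing_sampling_rate_per_million = 1000000   # always sample, per the comment above
EOF
sudo systemctl restart crio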
	I0408 22:56:32.905108   21772 command_runner.go:130] > # CRI-O NRI configuration.
	I0408 22:56:32.905113   21772 command_runner.go:130] > [crio.nri]
	I0408 22:56:32.905117   21772 command_runner.go:130] > # Globally enable or disable NRI.
	I0408 22:56:32.905125   21772 command_runner.go:130] > # enable_nri = false
	I0408 22:56:32.905129   21772 command_runner.go:130] > # NRI socket to listen on.
	I0408 22:56:32.905136   21772 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0408 22:56:32.905139   21772 command_runner.go:130] > # NRI plugin directory to use.
	I0408 22:56:32.905144   21772 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0408 22:56:32.905148   21772 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0408 22:56:32.905155   21772 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0408 22:56:32.905164   21772 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0408 22:56:32.905171   21772 command_runner.go:130] > # nri_disable_connections = false
	I0408 22:56:32.905175   21772 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0408 22:56:32.905182   21772 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0408 22:56:32.905186   21772 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0408 22:56:32.905193   21772 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0408 22:56:32.905199   21772 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0408 22:56:32.905204   21772 command_runner.go:130] > [crio.stats]
	I0408 22:56:32.905210   21772 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0408 22:56:32.905217   21772 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0408 22:56:32.905223   21772 command_runner.go:130] > # stats_collection_period = 0
	I0408 22:56:32.905256   21772 command_runner.go:130] ! time="2025-04-08 22:56:32.868436253Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0408 22:56:32.905274   21772 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0408 22:56:32.905342   21772 cni.go:84] Creating CNI manager for ""
	I0408 22:56:32.905354   21772 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0408 22:56:32.905364   21772 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0408 22:56:32.905388   21772 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.234 APIServerPort:8441 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-546336 NodeName:functional-546336 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.234"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.234 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0408 22:56:32.905493   21772 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.234
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-546336"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.234"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.234"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
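The YAML above is written to the node as /var/tmp/minikube/kubeadm.yaml.new a few lines further down. As a rough, hand-run sketch only (minikube's actual bootstrap adds flags and preflight handling not shown here), such a file is consumed by pointing kubeadm at it:

# sketch; not the literal command minikube runs
sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --ignore-preflight-errors=all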
	
	I0408 22:56:32.905580   21772 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0408 22:56:32.914548   21772 command_runner.go:130] > kubeadm
	I0408 22:56:32.914564   21772 command_runner.go:130] > kubectl
	I0408 22:56:32.914568   21772 command_runner.go:130] > kubelet
	I0408 22:56:32.914646   21772 binaries.go:44] Found k8s binaries, skipping transfer
	I0408 22:56:32.914718   21772 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0408 22:56:32.923150   21772 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0408 22:56:32.938212   21772 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0408 22:56:32.953395   21772 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2297 bytes)
	I0408 22:56:32.968282   21772 ssh_runner.go:195] Run: grep 192.168.39.234	control-plane.minikube.internal$ /etc/hosts
	I0408 22:56:32.971857   21772 command_runner.go:130] > 192.168.39.234	control-plane.minikube.internal
	I0408 22:56:32.971923   21772 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 22:56:33.097315   21772 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0408 22:56:33.112048   21772 certs.go:68] Setting up /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/functional-546336 for IP: 192.168.39.234
	I0408 22:56:33.112066   21772 certs.go:194] generating shared ca certs ...
	I0408 22:56:33.112083   21772 certs.go:226] acquiring lock for ca certs: {Name:mk0d455aae85017ac942481bbc1202ccedea144f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 22:56:33.112251   21772 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20501-9125/.minikube/ca.key
	I0408 22:56:33.112294   21772 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20501-9125/.minikube/proxy-client-ca.key
	I0408 22:56:33.112308   21772 certs.go:256] generating profile certs ...
	I0408 22:56:33.112383   21772 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/functional-546336/client.key
	I0408 22:56:33.112451   21772 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/functional-546336/apiserver.key.848fae18
	I0408 22:56:33.112486   21772 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/functional-546336/proxy-client.key
	I0408 22:56:33.112495   21772 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20501-9125/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0408 22:56:33.112506   21772 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20501-9125/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0408 22:56:33.112517   21772 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20501-9125/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0408 22:56:33.112526   21772 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20501-9125/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0408 22:56:33.112540   21772 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/functional-546336/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0408 22:56:33.112552   21772 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/functional-546336/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0408 22:56:33.112561   21772 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/functional-546336/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0408 22:56:33.112572   21772 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/functional-546336/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0408 22:56:33.112624   21772 certs.go:484] found cert: /home/jenkins/minikube-integration/20501-9125/.minikube/certs/16314.pem (1338 bytes)
	W0408 22:56:33.112665   21772 certs.go:480] ignoring /home/jenkins/minikube-integration/20501-9125/.minikube/certs/16314_empty.pem, impossibly tiny 0 bytes
	I0408 22:56:33.112678   21772 certs.go:484] found cert: /home/jenkins/minikube-integration/20501-9125/.minikube/certs/ca-key.pem (1679 bytes)
	I0408 22:56:33.112704   21772 certs.go:484] found cert: /home/jenkins/minikube-integration/20501-9125/.minikube/certs/ca.pem (1082 bytes)
	I0408 22:56:33.112735   21772 certs.go:484] found cert: /home/jenkins/minikube-integration/20501-9125/.minikube/certs/cert.pem (1123 bytes)
	I0408 22:56:33.112774   21772 certs.go:484] found cert: /home/jenkins/minikube-integration/20501-9125/.minikube/certs/key.pem (1675 bytes)
	I0408 22:56:33.112819   21772 certs.go:484] found cert: /home/jenkins/minikube-integration/20501-9125/.minikube/files/etc/ssl/certs/163142.pem (1708 bytes)
	I0408 22:56:33.112860   21772 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20501-9125/.minikube/files/etc/ssl/certs/163142.pem -> /usr/share/ca-certificates/163142.pem
	I0408 22:56:33.112879   21772 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20501-9125/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0408 22:56:33.112897   21772 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20501-9125/.minikube/certs/16314.pem -> /usr/share/ca-certificates/16314.pem
	I0408 22:56:33.113475   21772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20501-9125/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0408 22:56:33.137877   21772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20501-9125/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0408 22:56:33.159070   21772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20501-9125/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0408 22:56:33.185298   21772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20501-9125/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0408 22:56:33.207770   21772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/functional-546336/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0408 22:56:33.228856   21772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/functional-546336/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0408 22:56:33.251027   21772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/functional-546336/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0408 22:56:33.272315   21772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/functional-546336/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0408 22:56:33.294625   21772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20501-9125/.minikube/files/etc/ssl/certs/163142.pem --> /usr/share/ca-certificates/163142.pem (1708 bytes)
	I0408 22:56:33.316217   21772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20501-9125/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0408 22:56:33.337786   21772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20501-9125/.minikube/certs/16314.pem --> /usr/share/ca-certificates/16314.pem (1338 bytes)
	I0408 22:56:33.358722   21772 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0408 22:56:33.373131   21772 ssh_runner.go:195] Run: openssl version
	I0408 22:56:33.378702   21772 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0408 22:56:33.378755   21772 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/163142.pem && ln -fs /usr/share/ca-certificates/163142.pem /etc/ssl/certs/163142.pem"
	I0408 22:56:33.388262   21772 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/163142.pem
	I0408 22:56:33.392059   21772 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Apr  8 22:53 /usr/share/ca-certificates/163142.pem
	I0408 22:56:33.392090   21772 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  8 22:53 /usr/share/ca-certificates/163142.pem
	I0408 22:56:33.392135   21772 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/163142.pem
	I0408 22:56:33.397236   21772 command_runner.go:130] > 3ec20f2e
	I0408 22:56:33.397295   21772 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/163142.pem /etc/ssl/certs/3ec20f2e.0"
	I0408 22:56:33.405382   21772 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0408 22:56:33.414578   21772 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0408 22:56:33.418346   21772 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Apr  8 22:46 /usr/share/ca-certificates/minikubeCA.pem
	I0408 22:56:33.418448   21772 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  8 22:46 /usr/share/ca-certificates/minikubeCA.pem
	I0408 22:56:33.418490   21772 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0408 22:56:33.423400   21772 command_runner.go:130] > b5213941
	I0408 22:56:33.423452   21772 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0408 22:56:33.431557   21772 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16314.pem && ln -fs /usr/share/ca-certificates/16314.pem /etc/ssl/certs/16314.pem"
	I0408 22:56:33.442046   21772 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16314.pem
	I0408 22:56:33.446095   21772 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Apr  8 22:53 /usr/share/ca-certificates/16314.pem
	I0408 22:56:33.446156   21772 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  8 22:53 /usr/share/ca-certificates/16314.pem
	I0408 22:56:33.446198   21772 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16314.pem
	I0408 22:56:33.451257   21772 command_runner.go:130] > 51391683
	I0408 22:56:33.451490   21772 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16314.pem /etc/ssl/certs/51391683.0"
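The hash-and-link pairs above follow OpenSSL's CA lookup convention: a CA certificate under /etc/ssl/certs is located through a symlink named "<subject-hash>.0" that points at the PEM file. Equivalent steps by hand, using paths taken from this log:

HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"
# with the hash link in place, chain verification against that CA is expected to succeed
sudo openssl verify -CApath /etc/ssl/certs /var/lib/minikube/certs/apiserver.crt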
	I0408 22:56:33.460149   21772 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0408 22:56:33.463927   21772 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0408 22:56:33.463942   21772 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0408 22:56:33.463948   21772 command_runner.go:130] > Device: 253,1	Inode: 7338542     Links: 1
	I0408 22:56:33.463973   21772 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0408 22:56:33.463986   21772 command_runner.go:130] > Access: 2025-04-08 22:54:04.578144403 +0000
	I0408 22:56:33.463994   21772 command_runner.go:130] > Modify: 2025-04-08 22:54:04.578144403 +0000
	I0408 22:56:33.464003   21772 command_runner.go:130] > Change: 2025-04-08 22:54:04.578144403 +0000
	I0408 22:56:33.464008   21772 command_runner.go:130] >  Birth: 2025-04-08 22:54:04.578144403 +0000
	I0408 22:56:33.464063   21772 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0408 22:56:33.469050   21772 command_runner.go:130] > Certificate will not expire
	I0408 22:56:33.469263   21772 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0408 22:56:33.474068   21772 command_runner.go:130] > Certificate will not expire
	I0408 22:56:33.474186   21772 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0408 22:56:33.478955   21772 command_runner.go:130] > Certificate will not expire
	I0408 22:56:33.479120   21772 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0408 22:56:33.484075   21772 command_runner.go:130] > Certificate will not expire
	I0408 22:56:33.484130   21772 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0408 22:56:33.488910   21772 command_runner.go:130] > Certificate will not expire
	I0408 22:56:33.488951   21772 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0408 22:56:33.493716   21772 command_runner.go:130] > Certificate will not expire
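The -checkend 86400 probes above ask whether each certificate expires within the next 86400 seconds (24 hours); exit status 0 prints "Certificate will not expire", a non-zero status means it would. The same check widened to 30 days:

sudo openssl x509 -noout -enddate -in /var/lib/minikube/certs/apiserver.crt
sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 2592000 \
  && echo "valid for at least 30 more days" || echo "expires within 30 days"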
	I0408 22:56:33.493900   21772 kubeadm.go:392] StartCluster: {Name:functional-546336 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:functional-5463
36 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.234 Port:8441 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpti
ons:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 22:56:33.493993   21772 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0408 22:56:33.494051   21772 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0408 22:56:33.531075   21772 command_runner.go:130] > f5383886ea44f313131ed72cfa49949bd8b5d2f4873b4e32d81e980ce2940fd0
	I0408 22:56:33.531123   21772 command_runner.go:130] > c7dc125c272a5b82818d376a354500ce1464ae56dbc755c0a0779f8c284bfc5c
	I0408 22:56:33.531134   21772 command_runner.go:130] > 0b6b87627794e032e13a615d5ea5b991ffef3843bb9ae6e9f153041eb733d782
	I0408 22:56:33.531145   21772 command_runner.go:130] > d19ba2e208f70c1259cbd17e3566fbc44b8b4d173fc3026591fa8cea6fad11ec
	I0408 22:56:33.531154   21772 command_runner.go:130] > a02e7488bb5d2cf6fe89c9af4932fa61408d59b64f5415e68dd23aef1e5f092c
	I0408 22:56:33.531170   21772 command_runner.go:130] > e50303177ff5685821e81bebd1db71a63709a33fc7ff89cb111d923979605c70
	I0408 22:56:33.531180   21772 command_runner.go:130] > d31c5cb795e76e9631c155d0b7e96672025f714ef26b9972bd87e759a40f7e4d
	I0408 22:56:33.531194   21772 command_runner.go:130] > 090c0b802b3a3a27f9446836815daacc48a4cfa1ed0b54043325e0eada99d664
	I0408 22:56:33.531207   21772 command_runner.go:130] > f5685f897beb5418ec57fb5f80ab70b0ffe4b406ecf635a6249b528e65cabfc4
	I0408 22:56:33.531221   21772 command_runner.go:130] > 31aa14fb57438b0d736b005aed16b4fb5438a2d6ce4af59f042b06d0271bcaa5
	I0408 22:56:33.531245   21772 cri.go:89] found id: "f5383886ea44f313131ed72cfa49949bd8b5d2f4873b4e32d81e980ce2940fd0"
	I0408 22:56:33.531257   21772 cri.go:89] found id: "c7dc125c272a5b82818d376a354500ce1464ae56dbc755c0a0779f8c284bfc5c"
	I0408 22:56:33.531266   21772 cri.go:89] found id: "0b6b87627794e032e13a615d5ea5b991ffef3843bb9ae6e9f153041eb733d782"
	I0408 22:56:33.531275   21772 cri.go:89] found id: "d19ba2e208f70c1259cbd17e3566fbc44b8b4d173fc3026591fa8cea6fad11ec"
	I0408 22:56:33.531284   21772 cri.go:89] found id: "a02e7488bb5d2cf6fe89c9af4932fa61408d59b64f5415e68dd23aef1e5f092c"
	I0408 22:56:33.531294   21772 cri.go:89] found id: "e50303177ff5685821e81bebd1db71a63709a33fc7ff89cb111d923979605c70"
	I0408 22:56:33.531302   21772 cri.go:89] found id: "d31c5cb795e76e9631c155d0b7e96672025f714ef26b9972bd87e759a40f7e4d"
	I0408 22:56:33.531308   21772 cri.go:89] found id: "090c0b802b3a3a27f9446836815daacc48a4cfa1ed0b54043325e0eada99d664"
	I0408 22:56:33.531312   21772 cri.go:89] found id: "f5685f897beb5418ec57fb5f80ab70b0ffe4b406ecf635a6249b528e65cabfc4"
	I0408 22:56:33.531318   21772 cri.go:89] found id: "31aa14fb57438b0d736b005aed16b4fb5438a2d6ce4af59f042b06d0271bcaa5"
	I0408 22:56:33.531323   21772 cri.go:89] found id: ""
	I0408 22:56:33.531374   21772 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
functional_test.go:678: failed to soft start minikube. args "out/minikube-linux-amd64 start -p functional-546336 --alsologtostderr -v=8": exit status 109
functional_test.go:680: soft start took 13m52.946855109s for "functional-546336" cluster.
I0408 23:08:46.657936   16314 config.go:182] Loaded profile config "functional-546336": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-546336 -n functional-546336
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-546336 -n functional-546336: exit status 2 (218.75983ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestFunctional/serial/SoftStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/serial/SoftStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p functional-546336 logs -n 25
E0408 23:09:21.008034   16314 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/addons-355098/client.crt: no such file or directory" logger="UnhandledError"
E0408 23:12:57.931782   16314 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/addons-355098/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p functional-546336 logs -n 25: (5m36.250459013s)
helpers_test.go:252: TestFunctional/serial/SoftStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| addons  | addons-355098 addons           | addons-355098     | jenkins | v1.35.0 | 08 Apr 25 22:49 UTC | 08 Apr 25 22:49 UTC |
	|         | disable csi-hostpath-driver    |                   |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                   |         |         |                     |                     |
	| ip      | addons-355098 ip               | addons-355098     | jenkins | v1.35.0 | 08 Apr 25 22:51 UTC | 08 Apr 25 22:51 UTC |
	| addons  | addons-355098 addons disable   | addons-355098     | jenkins | v1.35.0 | 08 Apr 25 22:51 UTC | 08 Apr 25 22:51 UTC |
	|         | ingress-dns --alsologtostderr  |                   |         |         |                     |                     |
	|         | -v=1                           |                   |         |         |                     |                     |
	| addons  | addons-355098 addons disable   | addons-355098     | jenkins | v1.35.0 | 08 Apr 25 22:51 UTC | 08 Apr 25 22:51 UTC |
	|         | ingress --alsologtostderr -v=1 |                   |         |         |                     |                     |
	| stop    | -p addons-355098               | addons-355098     | jenkins | v1.35.0 | 08 Apr 25 22:51 UTC | 08 Apr 25 22:52 UTC |
	| addons  | enable dashboard -p            | addons-355098     | jenkins | v1.35.0 | 08 Apr 25 22:52 UTC | 08 Apr 25 22:52 UTC |
	|         | addons-355098                  |                   |         |         |                     |                     |
	| addons  | disable dashboard -p           | addons-355098     | jenkins | v1.35.0 | 08 Apr 25 22:52 UTC | 08 Apr 25 22:52 UTC |
	|         | addons-355098                  |                   |         |         |                     |                     |
	| addons  | disable gvisor -p              | addons-355098     | jenkins | v1.35.0 | 08 Apr 25 22:52 UTC | 08 Apr 25 22:52 UTC |
	|         | addons-355098                  |                   |         |         |                     |                     |
	| delete  | -p addons-355098               | addons-355098     | jenkins | v1.35.0 | 08 Apr 25 22:52 UTC | 08 Apr 25 22:52 UTC |
	| start   | -p nospam-715453 -n=1          | nospam-715453     | jenkins | v1.35.0 | 08 Apr 25 22:52 UTC | 08 Apr 25 22:53 UTC |
	|         | --memory=2250 --wait=false     |                   |         |         |                     |                     |
	|         | --log_dir=/tmp/nospam-715453   |                   |         |         |                     |                     |
	|         | --driver=kvm2                  |                   |         |         |                     |                     |
	|         | --container-runtime=crio       |                   |         |         |                     |                     |
	| start   | nospam-715453 --log_dir        | nospam-715453     | jenkins | v1.35.0 | 08 Apr 25 22:53 UTC |                     |
	|         | /tmp/nospam-715453 start       |                   |         |         |                     |                     |
	|         | --dry-run                      |                   |         |         |                     |                     |
	| start   | nospam-715453 --log_dir        | nospam-715453     | jenkins | v1.35.0 | 08 Apr 25 22:53 UTC |                     |
	|         | /tmp/nospam-715453 start       |                   |         |         |                     |                     |
	|         | --dry-run                      |                   |         |         |                     |                     |
	| start   | nospam-715453 --log_dir        | nospam-715453     | jenkins | v1.35.0 | 08 Apr 25 22:53 UTC |                     |
	|         | /tmp/nospam-715453 start       |                   |         |         |                     |                     |
	|         | --dry-run                      |                   |         |         |                     |                     |
	| pause   | nospam-715453 --log_dir        | nospam-715453     | jenkins | v1.35.0 | 08 Apr 25 22:53 UTC | 08 Apr 25 22:53 UTC |
	|         | /tmp/nospam-715453 pause       |                   |         |         |                     |                     |
	| pause   | nospam-715453 --log_dir        | nospam-715453     | jenkins | v1.35.0 | 08 Apr 25 22:53 UTC | 08 Apr 25 22:53 UTC |
	|         | /tmp/nospam-715453 pause       |                   |         |         |                     |                     |
	| pause   | nospam-715453 --log_dir        | nospam-715453     | jenkins | v1.35.0 | 08 Apr 25 22:53 UTC | 08 Apr 25 22:53 UTC |
	|         | /tmp/nospam-715453 pause       |                   |         |         |                     |                     |
	| unpause | nospam-715453 --log_dir        | nospam-715453     | jenkins | v1.35.0 | 08 Apr 25 22:53 UTC | 08 Apr 25 22:53 UTC |
	|         | /tmp/nospam-715453 unpause     |                   |         |         |                     |                     |
	| unpause | nospam-715453 --log_dir        | nospam-715453     | jenkins | v1.35.0 | 08 Apr 25 22:53 UTC | 08 Apr 25 22:53 UTC |
	|         | /tmp/nospam-715453 unpause     |                   |         |         |                     |                     |
	| unpause | nospam-715453 --log_dir        | nospam-715453     | jenkins | v1.35.0 | 08 Apr 25 22:53 UTC | 08 Apr 25 22:53 UTC |
	|         | /tmp/nospam-715453 unpause     |                   |         |         |                     |                     |
	| stop    | nospam-715453 --log_dir        | nospam-715453     | jenkins | v1.35.0 | 08 Apr 25 22:53 UTC | 08 Apr 25 22:53 UTC |
	|         | /tmp/nospam-715453 stop        |                   |         |         |                     |                     |
	| stop    | nospam-715453 --log_dir        | nospam-715453     | jenkins | v1.35.0 | 08 Apr 25 22:53 UTC | 08 Apr 25 22:53 UTC |
	|         | /tmp/nospam-715453 stop        |                   |         |         |                     |                     |
	| stop    | nospam-715453 --log_dir        | nospam-715453     | jenkins | v1.35.0 | 08 Apr 25 22:53 UTC | 08 Apr 25 22:53 UTC |
	|         | /tmp/nospam-715453 stop        |                   |         |         |                     |                     |
	| delete  | -p nospam-715453               | nospam-715453     | jenkins | v1.35.0 | 08 Apr 25 22:53 UTC | 08 Apr 25 22:53 UTC |
	| start   | -p functional-546336           | functional-546336 | jenkins | v1.35.0 | 08 Apr 25 22:53 UTC | 08 Apr 25 22:54 UTC |
	|         | --memory=4000                  |                   |         |         |                     |                     |
	|         | --apiserver-port=8441          |                   |         |         |                     |                     |
	|         | --wait=all --driver=kvm2       |                   |         |         |                     |                     |
	|         | --container-runtime=crio       |                   |         |         |                     |                     |
	| start   | -p functional-546336           | functional-546336 | jenkins | v1.35.0 | 08 Apr 25 22:54 UTC |                     |
	|         | --alsologtostderr -v=8         |                   |         |         |                     |                     |
	|---------|--------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/08 22:54:53
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0408 22:54:53.750429   21772 out.go:345] Setting OutFile to fd 1 ...
	I0408 22:54:53.750673   21772 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0408 22:54:53.750778   21772 out.go:358] Setting ErrFile to fd 2...
	I0408 22:54:53.750790   21772 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0408 22:54:53.751041   21772 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20501-9125/.minikube/bin
	I0408 22:54:53.751600   21772 out.go:352] Setting JSON to false
	I0408 22:54:53.752542   21772 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":2239,"bootTime":1744150655,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0408 22:54:53.752628   21772 start.go:139] virtualization: kvm guest
	I0408 22:54:53.754529   21772 out.go:177] * [functional-546336] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0408 22:54:53.755700   21772 out.go:177]   - MINIKUBE_LOCATION=20501
	I0408 22:54:53.755700   21772 notify.go:220] Checking for updates...
	I0408 22:54:53.757645   21772 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0408 22:54:53.758881   21772 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20501-9125/kubeconfig
	I0408 22:54:53.760110   21772 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20501-9125/.minikube
	I0408 22:54:53.761221   21772 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0408 22:54:53.762262   21772 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0408 22:54:53.764007   21772 config.go:182] Loaded profile config "functional-546336": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0408 22:54:53.764090   21772 driver.go:404] Setting default libvirt URI to qemu:///system
	I0408 22:54:53.764531   21772 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 22:54:53.764591   21772 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 22:54:53.780528   21772 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37317
	I0408 22:54:53.780962   21772 main.go:141] libmachine: () Calling .GetVersion
	I0408 22:54:53.781388   21772 main.go:141] libmachine: Using API Version  1
	I0408 22:54:53.781409   21772 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 22:54:53.781752   21772 main.go:141] libmachine: () Calling .GetMachineName
	I0408 22:54:53.781914   21772 main.go:141] libmachine: (functional-546336) Calling .DriverName
	I0408 22:54:53.819375   21772 out.go:177] * Using the kvm2 driver based on existing profile
	I0408 22:54:53.820528   21772 start.go:297] selected driver: kvm2
	I0408 22:54:53.820538   21772 start.go:901] validating driver "kvm2" against &{Name:functional-546336 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:functional-546336 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.234 Port:8441 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 22:54:53.820619   21772 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0408 22:54:53.820910   21772 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 22:54:53.820988   21772 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20501-9125/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0408 22:54:53.835403   21772 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0408 22:54:53.836289   21772 cni.go:84] Creating CNI manager for ""
	I0408 22:54:53.836343   21772 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0408 22:54:53.836403   21772 start.go:340] cluster config:
	{Name:functional-546336 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:functional-546336 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.234 Port:8441 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 22:54:53.836507   21772 iso.go:125] acquiring lock: {Name:mk618477bad490b102618c53c9c8c6b34f33ce81 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 22:54:53.838584   21772 out.go:177] * Starting "functional-546336" primary control-plane node in "functional-546336" cluster
	I0408 22:54:53.839517   21772 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0408 22:54:53.839549   21772 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20501-9125/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4
	I0408 22:54:53.839557   21772 cache.go:56] Caching tarball of preloaded images
	I0408 22:54:53.839620   21772 preload.go:172] Found /home/jenkins/minikube-integration/20501-9125/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0408 22:54:53.839629   21772 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on crio
	I0408 22:54:53.839708   21772 profile.go:143] Saving config to /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/functional-546336/config.json ...
	I0408 22:54:53.839890   21772 start.go:360] acquireMachinesLock for functional-546336: {Name:mke7be7b51cfddf557a39ecf6493fff6a1168ec9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 22:54:53.839934   21772 start.go:364] duration metric: took 24.616µs to acquireMachinesLock for "functional-546336"
	I0408 22:54:53.839951   21772 start.go:96] Skipping create...Using existing machine configuration
	I0408 22:54:53.839957   21772 fix.go:54] fixHost starting: 
	I0408 22:54:53.840198   21772 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 22:54:53.840227   21772 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 22:54:53.853842   21772 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36351
	I0408 22:54:53.854248   21772 main.go:141] libmachine: () Calling .GetVersion
	I0408 22:54:53.854642   21772 main.go:141] libmachine: Using API Version  1
	I0408 22:54:53.854660   21772 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 22:54:53.854972   21772 main.go:141] libmachine: () Calling .GetMachineName
	I0408 22:54:53.855161   21772 main.go:141] libmachine: (functional-546336) Calling .DriverName
	I0408 22:54:53.855314   21772 main.go:141] libmachine: (functional-546336) Calling .GetState
	I0408 22:54:53.856978   21772 fix.go:112] recreateIfNeeded on functional-546336: state=Running err=<nil>
	W0408 22:54:53.856995   21772 fix.go:138] unexpected machine state, will restart: <nil>
	I0408 22:54:53.858448   21772 out.go:177] * Updating the running kvm2 "functional-546336" VM ...
	I0408 22:54:53.859370   21772 machine.go:93] provisionDockerMachine start ...
	I0408 22:54:53.859389   21772 main.go:141] libmachine: (functional-546336) Calling .DriverName
	I0408 22:54:53.859573   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHHostname
	I0408 22:54:53.861808   21772 main.go:141] libmachine: (functional-546336) DBG | domain functional-546336 has defined MAC address 52:54:00:8a:a0:b8 in network mk-functional-546336
	I0408 22:54:53.862195   21772 main.go:141] libmachine: (functional-546336) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:a0:b8", ip: ""} in network mk-functional-546336: {Iface:virbr1 ExpiryTime:2025-04-08 23:53:47 +0000 UTC Type:0 Mac:52:54:00:8a:a0:b8 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:functional-546336 Clientid:01:52:54:00:8a:a0:b8}
	I0408 22:54:53.862223   21772 main.go:141] libmachine: (functional-546336) DBG | domain functional-546336 has defined IP address 192.168.39.234 and MAC address 52:54:00:8a:a0:b8 in network mk-functional-546336
	I0408 22:54:53.862331   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHPort
	I0408 22:54:53.862495   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHKeyPath
	I0408 22:54:53.862642   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHKeyPath
	I0408 22:54:53.862769   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHUsername
	I0408 22:54:53.862913   21772 main.go:141] libmachine: Using SSH client type: native
	I0408 22:54:53.863111   21772 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.234 22 <nil> <nil>}
	I0408 22:54:53.863123   21772 main.go:141] libmachine: About to run SSH command:
	hostname
	I0408 22:54:53.975743   21772 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-546336
	
	I0408 22:54:53.975774   21772 main.go:141] libmachine: (functional-546336) Calling .GetMachineName
	I0408 22:54:53.976060   21772 buildroot.go:166] provisioning hostname "functional-546336"
	I0408 22:54:53.976090   21772 main.go:141] libmachine: (functional-546336) Calling .GetMachineName
	I0408 22:54:53.976275   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHHostname
	I0408 22:54:53.978794   21772 main.go:141] libmachine: (functional-546336) DBG | domain functional-546336 has defined MAC address 52:54:00:8a:a0:b8 in network mk-functional-546336
	I0408 22:54:53.979136   21772 main.go:141] libmachine: (functional-546336) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:a0:b8", ip: ""} in network mk-functional-546336: {Iface:virbr1 ExpiryTime:2025-04-08 23:53:47 +0000 UTC Type:0 Mac:52:54:00:8a:a0:b8 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:functional-546336 Clientid:01:52:54:00:8a:a0:b8}
	I0408 22:54:53.979155   21772 main.go:141] libmachine: (functional-546336) DBG | domain functional-546336 has defined IP address 192.168.39.234 and MAC address 52:54:00:8a:a0:b8 in network mk-functional-546336
	I0408 22:54:53.979343   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHPort
	I0408 22:54:53.979538   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHKeyPath
	I0408 22:54:53.979686   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHKeyPath
	I0408 22:54:53.979818   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHUsername
	I0408 22:54:53.979975   21772 main.go:141] libmachine: Using SSH client type: native
	I0408 22:54:53.980186   21772 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.234 22 <nil> <nil>}
	I0408 22:54:53.980207   21772 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-546336 && echo "functional-546336" | sudo tee /etc/hostname
	I0408 22:54:54.107226   21772 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-546336
	
	I0408 22:54:54.107256   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHHostname
	I0408 22:54:54.110121   21772 main.go:141] libmachine: (functional-546336) DBG | domain functional-546336 has defined MAC address 52:54:00:8a:a0:b8 in network mk-functional-546336
	I0408 22:54:54.110402   21772 main.go:141] libmachine: (functional-546336) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:a0:b8", ip: ""} in network mk-functional-546336: {Iface:virbr1 ExpiryTime:2025-04-08 23:53:47 +0000 UTC Type:0 Mac:52:54:00:8a:a0:b8 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:functional-546336 Clientid:01:52:54:00:8a:a0:b8}
	I0408 22:54:54.110442   21772 main.go:141] libmachine: (functional-546336) DBG | domain functional-546336 has defined IP address 192.168.39.234 and MAC address 52:54:00:8a:a0:b8 in network mk-functional-546336
	I0408 22:54:54.110575   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHPort
	I0408 22:54:54.110737   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHKeyPath
	I0408 22:54:54.110870   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHKeyPath
	I0408 22:54:54.110984   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHUsername
	I0408 22:54:54.111111   21772 main.go:141] libmachine: Using SSH client type: native
	I0408 22:54:54.111332   21772 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.234 22 <nil> <nil>}
	I0408 22:54:54.111355   21772 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-546336' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-546336/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-546336' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0408 22:54:54.224292   21772 main.go:141] libmachine: SSH cmd err, output: <nil>: 
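	The shell snippet above only pins 127.0.1.1 to the node's hostname and leaves the rest of /etc/hosts untouched. A minimal spot-check of the result over minikube ssh, sketched with the profile name shown in this log:
	
		minikube -p functional-546336 ssh "grep -n '127.0.1.1' /etc/hosts"
		# expected: a single entry reading "127.0.1.1 functional-546336"
	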
	I0408 22:54:54.224321   21772 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20501-9125/.minikube CaCertPath:/home/jenkins/minikube-integration/20501-9125/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20501-9125/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20501-9125/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20501-9125/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20501-9125/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20501-9125/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20501-9125/.minikube}
	I0408 22:54:54.224341   21772 buildroot.go:174] setting up certificates
	I0408 22:54:54.224352   21772 provision.go:84] configureAuth start
	I0408 22:54:54.224363   21772 main.go:141] libmachine: (functional-546336) Calling .GetMachineName
	I0408 22:54:54.224632   21772 main.go:141] libmachine: (functional-546336) Calling .GetIP
	I0408 22:54:54.227055   21772 main.go:141] libmachine: (functional-546336) DBG | domain functional-546336 has defined MAC address 52:54:00:8a:a0:b8 in network mk-functional-546336
	I0408 22:54:54.227343   21772 main.go:141] libmachine: (functional-546336) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:a0:b8", ip: ""} in network mk-functional-546336: {Iface:virbr1 ExpiryTime:2025-04-08 23:53:47 +0000 UTC Type:0 Mac:52:54:00:8a:a0:b8 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:functional-546336 Clientid:01:52:54:00:8a:a0:b8}
	I0408 22:54:54.227372   21772 main.go:141] libmachine: (functional-546336) DBG | domain functional-546336 has defined IP address 192.168.39.234 and MAC address 52:54:00:8a:a0:b8 in network mk-functional-546336
	I0408 22:54:54.227496   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHHostname
	I0408 22:54:54.229707   21772 main.go:141] libmachine: (functional-546336) DBG | domain functional-546336 has defined MAC address 52:54:00:8a:a0:b8 in network mk-functional-546336
	I0408 22:54:54.230025   21772 main.go:141] libmachine: (functional-546336) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:a0:b8", ip: ""} in network mk-functional-546336: {Iface:virbr1 ExpiryTime:2025-04-08 23:53:47 +0000 UTC Type:0 Mac:52:54:00:8a:a0:b8 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:functional-546336 Clientid:01:52:54:00:8a:a0:b8}
	I0408 22:54:54.230063   21772 main.go:141] libmachine: (functional-546336) DBG | domain functional-546336 has defined IP address 192.168.39.234 and MAC address 52:54:00:8a:a0:b8 in network mk-functional-546336
	I0408 22:54:54.230204   21772 provision.go:143] copyHostCerts
	I0408 22:54:54.230228   21772 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20501-9125/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20501-9125/.minikube/ca.pem
	I0408 22:54:54.230253   21772 exec_runner.go:144] found /home/jenkins/minikube-integration/20501-9125/.minikube/ca.pem, removing ...
	I0408 22:54:54.230267   21772 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20501-9125/.minikube/ca.pem
	I0408 22:54:54.230331   21772 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20501-9125/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20501-9125/.minikube/ca.pem (1082 bytes)
	I0408 22:54:54.230397   21772 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20501-9125/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20501-9125/.minikube/cert.pem
	I0408 22:54:54.230414   21772 exec_runner.go:144] found /home/jenkins/minikube-integration/20501-9125/.minikube/cert.pem, removing ...
	I0408 22:54:54.230421   21772 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20501-9125/.minikube/cert.pem
	I0408 22:54:54.230442   21772 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20501-9125/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20501-9125/.minikube/cert.pem (1123 bytes)
	I0408 22:54:54.230555   21772 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20501-9125/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20501-9125/.minikube/key.pem
	I0408 22:54:54.230580   21772 exec_runner.go:144] found /home/jenkins/minikube-integration/20501-9125/.minikube/key.pem, removing ...
	I0408 22:54:54.230584   21772 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20501-9125/.minikube/key.pem
	I0408 22:54:54.230614   21772 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20501-9125/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20501-9125/.minikube/key.pem (1675 bytes)
	I0408 22:54:54.230663   21772 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20501-9125/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20501-9125/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20501-9125/.minikube/certs/ca-key.pem org=jenkins.functional-546336 san=[127.0.0.1 192.168.39.234 functional-546336 localhost minikube]
	I0408 22:54:54.377433   21772 provision.go:177] copyRemoteCerts
	I0408 22:54:54.377494   21772 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0408 22:54:54.377516   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHHostname
	I0408 22:54:54.379910   21772 main.go:141] libmachine: (functional-546336) DBG | domain functional-546336 has defined MAC address 52:54:00:8a:a0:b8 in network mk-functional-546336
	I0408 22:54:54.380186   21772 main.go:141] libmachine: (functional-546336) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:a0:b8", ip: ""} in network mk-functional-546336: {Iface:virbr1 ExpiryTime:2025-04-08 23:53:47 +0000 UTC Type:0 Mac:52:54:00:8a:a0:b8 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:functional-546336 Clientid:01:52:54:00:8a:a0:b8}
	I0408 22:54:54.380208   21772 main.go:141] libmachine: (functional-546336) DBG | domain functional-546336 has defined IP address 192.168.39.234 and MAC address 52:54:00:8a:a0:b8 in network mk-functional-546336
	I0408 22:54:54.380353   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHPort
	I0408 22:54:54.380512   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHKeyPath
	I0408 22:54:54.380651   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHUsername
	I0408 22:54:54.380759   21772 sshutil.go:53] new ssh client: &{IP:192.168.39.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20501-9125/.minikube/machines/functional-546336/id_rsa Username:docker}
	I0408 22:54:54.469346   21772 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20501-9125/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0408 22:54:54.469406   21772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20501-9125/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0408 22:54:54.492119   21772 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20501-9125/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0408 22:54:54.492170   21772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20501-9125/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0408 22:54:54.515795   21772 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20501-9125/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0408 22:54:54.515854   21772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20501-9125/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0408 22:54:54.538157   21772 provision.go:87] duration metric: took 313.794377ms to configureAuth
	I0408 22:54:54.538179   21772 buildroot.go:189] setting minikube options for container-runtime
	I0408 22:54:54.538348   21772 config.go:182] Loaded profile config "functional-546336": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0408 22:54:54.538415   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHHostname
	I0408 22:54:54.540893   21772 main.go:141] libmachine: (functional-546336) DBG | domain functional-546336 has defined MAC address 52:54:00:8a:a0:b8 in network mk-functional-546336
	I0408 22:54:54.541189   21772 main.go:141] libmachine: (functional-546336) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:a0:b8", ip: ""} in network mk-functional-546336: {Iface:virbr1 ExpiryTime:2025-04-08 23:53:47 +0000 UTC Type:0 Mac:52:54:00:8a:a0:b8 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:functional-546336 Clientid:01:52:54:00:8a:a0:b8}
	I0408 22:54:54.541211   21772 main.go:141] libmachine: (functional-546336) DBG | domain functional-546336 has defined IP address 192.168.39.234 and MAC address 52:54:00:8a:a0:b8 in network mk-functional-546336
	I0408 22:54:54.541388   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHPort
	I0408 22:54:54.541569   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHKeyPath
	I0408 22:54:54.541794   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHKeyPath
	I0408 22:54:54.541956   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHUsername
	I0408 22:54:54.542154   21772 main.go:141] libmachine: Using SSH client type: native
	I0408 22:54:54.542410   21772 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.234 22 <nil> <nil>}
	I0408 22:54:54.542429   21772 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0408 22:55:00.049143   21772 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0408 22:55:00.049177   21772 machine.go:96] duration metric: took 6.189793928s to provisionDockerMachine
	I0408 22:55:00.049193   21772 start.go:293] postStartSetup for "functional-546336" (driver="kvm2")
	I0408 22:55:00.049216   21772 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0408 22:55:00.049238   21772 main.go:141] libmachine: (functional-546336) Calling .DriverName
	I0408 22:55:00.049527   21772 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0408 22:55:00.049554   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHHostname
	I0408 22:55:00.052053   21772 main.go:141] libmachine: (functional-546336) DBG | domain functional-546336 has defined MAC address 52:54:00:8a:a0:b8 in network mk-functional-546336
	I0408 22:55:00.052329   21772 main.go:141] libmachine: (functional-546336) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:a0:b8", ip: ""} in network mk-functional-546336: {Iface:virbr1 ExpiryTime:2025-04-08 23:53:47 +0000 UTC Type:0 Mac:52:54:00:8a:a0:b8 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:functional-546336 Clientid:01:52:54:00:8a:a0:b8}
	I0408 22:55:00.052357   21772 main.go:141] libmachine: (functional-546336) DBG | domain functional-546336 has defined IP address 192.168.39.234 and MAC address 52:54:00:8a:a0:b8 in network mk-functional-546336
	I0408 22:55:00.052449   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHPort
	I0408 22:55:00.052621   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHKeyPath
	I0408 22:55:00.052774   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHUsername
	I0408 22:55:00.052915   21772 sshutil.go:53] new ssh client: &{IP:192.168.39.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20501-9125/.minikube/machines/functional-546336/id_rsa Username:docker}
	I0408 22:55:00.137252   21772 ssh_runner.go:195] Run: cat /etc/os-release
	I0408 22:55:00.140999   21772 command_runner.go:130] > NAME=Buildroot
	I0408 22:55:00.141018   21772 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0408 22:55:00.141022   21772 command_runner.go:130] > ID=buildroot
	I0408 22:55:00.141034   21772 command_runner.go:130] > VERSION_ID=2023.02.9
	I0408 22:55:00.141041   21772 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0408 22:55:00.141078   21772 info.go:137] Remote host: Buildroot 2023.02.9
	I0408 22:55:00.141091   21772 filesync.go:126] Scanning /home/jenkins/minikube-integration/20501-9125/.minikube/addons for local assets ...
	I0408 22:55:00.141153   21772 filesync.go:126] Scanning /home/jenkins/minikube-integration/20501-9125/.minikube/files for local assets ...
	I0408 22:55:00.141241   21772 filesync.go:149] local asset: /home/jenkins/minikube-integration/20501-9125/.minikube/files/etc/ssl/certs/163142.pem -> 163142.pem in /etc/ssl/certs
	I0408 22:55:00.141253   21772 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20501-9125/.minikube/files/etc/ssl/certs/163142.pem -> /etc/ssl/certs/163142.pem
	I0408 22:55:00.141327   21772 filesync.go:149] local asset: /home/jenkins/minikube-integration/20501-9125/.minikube/files/etc/test/nested/copy/16314/hosts -> hosts in /etc/test/nested/copy/16314
	I0408 22:55:00.141336   21772 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20501-9125/.minikube/files/etc/test/nested/copy/16314/hosts -> /etc/test/nested/copy/16314/hosts
	I0408 22:55:00.141386   21772 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/16314
	I0408 22:55:00.149913   21772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20501-9125/.minikube/files/etc/ssl/certs/163142.pem --> /etc/ssl/certs/163142.pem (1708 bytes)
	I0408 22:55:00.172587   21772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20501-9125/.minikube/files/etc/test/nested/copy/16314/hosts --> /etc/test/nested/copy/16314/hosts (40 bytes)
	I0408 22:55:00.194320   21772 start.go:296] duration metric: took 145.104306ms for postStartSetup
	I0408 22:55:00.194353   21772 fix.go:56] duration metric: took 6.354395244s for fixHost
	I0408 22:55:00.194371   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHHostname
	I0408 22:55:00.197105   21772 main.go:141] libmachine: (functional-546336) DBG | domain functional-546336 has defined MAC address 52:54:00:8a:a0:b8 in network mk-functional-546336
	I0408 22:55:00.197468   21772 main.go:141] libmachine: (functional-546336) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:a0:b8", ip: ""} in network mk-functional-546336: {Iface:virbr1 ExpiryTime:2025-04-08 23:53:47 +0000 UTC Type:0 Mac:52:54:00:8a:a0:b8 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:functional-546336 Clientid:01:52:54:00:8a:a0:b8}
	I0408 22:55:00.197508   21772 main.go:141] libmachine: (functional-546336) DBG | domain functional-546336 has defined IP address 192.168.39.234 and MAC address 52:54:00:8a:a0:b8 in network mk-functional-546336
	I0408 22:55:00.197619   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHPort
	I0408 22:55:00.197806   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHKeyPath
	I0408 22:55:00.197977   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHKeyPath
	I0408 22:55:00.198135   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHUsername
	I0408 22:55:00.198315   21772 main.go:141] libmachine: Using SSH client type: native
	I0408 22:55:00.198518   21772 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.234 22 <nil> <nil>}
	I0408 22:55:00.198529   21772 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0408 22:55:00.312401   21772 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744152900.293880637
	
	I0408 22:55:00.312424   21772 fix.go:216] guest clock: 1744152900.293880637
	I0408 22:55:00.312432   21772 fix.go:229] Guest: 2025-04-08 22:55:00.293880637 +0000 UTC Remote: 2025-04-08 22:55:00.194356923 +0000 UTC m=+6.478226412 (delta=99.523714ms)
	I0408 22:55:00.312463   21772 fix.go:200] guest clock delta is within tolerance: 99.523714ms
	I0408 22:55:00.312469   21772 start.go:83] releasing machines lock for "functional-546336", held for 6.472524067s
	I0408 22:55:00.312490   21772 main.go:141] libmachine: (functional-546336) Calling .DriverName
	I0408 22:55:00.312723   21772 main.go:141] libmachine: (functional-546336) Calling .GetIP
	I0408 22:55:00.315235   21772 main.go:141] libmachine: (functional-546336) DBG | domain functional-546336 has defined MAC address 52:54:00:8a:a0:b8 in network mk-functional-546336
	I0408 22:55:00.315592   21772 main.go:141] libmachine: (functional-546336) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:a0:b8", ip: ""} in network mk-functional-546336: {Iface:virbr1 ExpiryTime:2025-04-08 23:53:47 +0000 UTC Type:0 Mac:52:54:00:8a:a0:b8 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:functional-546336 Clientid:01:52:54:00:8a:a0:b8}
	I0408 22:55:00.315620   21772 main.go:141] libmachine: (functional-546336) DBG | domain functional-546336 has defined IP address 192.168.39.234 and MAC address 52:54:00:8a:a0:b8 in network mk-functional-546336
	I0408 22:55:00.315756   21772 main.go:141] libmachine: (functional-546336) Calling .DriverName
	I0408 22:55:00.316286   21772 main.go:141] libmachine: (functional-546336) Calling .DriverName
	I0408 22:55:00.316432   21772 main.go:141] libmachine: (functional-546336) Calling .DriverName
	I0408 22:55:00.316535   21772 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0408 22:55:00.316574   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHHostname
	I0408 22:55:00.316683   21772 ssh_runner.go:195] Run: cat /version.json
	I0408 22:55:00.316708   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHHostname
	I0408 22:55:00.319048   21772 main.go:141] libmachine: (functional-546336) DBG | domain functional-546336 has defined MAC address 52:54:00:8a:a0:b8 in network mk-functional-546336
	I0408 22:55:00.319325   21772 main.go:141] libmachine: (functional-546336) DBG | domain functional-546336 has defined MAC address 52:54:00:8a:a0:b8 in network mk-functional-546336
	I0408 22:55:00.319354   21772 main.go:141] libmachine: (functional-546336) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:a0:b8", ip: ""} in network mk-functional-546336: {Iface:virbr1 ExpiryTime:2025-04-08 23:53:47 +0000 UTC Type:0 Mac:52:54:00:8a:a0:b8 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:functional-546336 Clientid:01:52:54:00:8a:a0:b8}
	I0408 22:55:00.319371   21772 main.go:141] libmachine: (functional-546336) DBG | domain functional-546336 has defined IP address 192.168.39.234 and MAC address 52:54:00:8a:a0:b8 in network mk-functional-546336
	I0408 22:55:00.319522   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHPort
	I0408 22:55:00.319696   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHKeyPath
	I0408 22:55:00.319776   21772 main.go:141] libmachine: (functional-546336) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:a0:b8", ip: ""} in network mk-functional-546336: {Iface:virbr1 ExpiryTime:2025-04-08 23:53:47 +0000 UTC Type:0 Mac:52:54:00:8a:a0:b8 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:functional-546336 Clientid:01:52:54:00:8a:a0:b8}
	I0408 22:55:00.319817   21772 main.go:141] libmachine: (functional-546336) DBG | domain functional-546336 has defined IP address 192.168.39.234 and MAC address 52:54:00:8a:a0:b8 in network mk-functional-546336
	I0408 22:55:00.319891   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHUsername
	I0408 22:55:00.319984   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHPort
	I0408 22:55:00.320037   21772 sshutil.go:53] new ssh client: &{IP:192.168.39.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20501-9125/.minikube/machines/functional-546336/id_rsa Username:docker}
	I0408 22:55:00.320121   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHKeyPath
	I0408 22:55:00.320259   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHUsername
	I0408 22:55:00.320368   21772 sshutil.go:53] new ssh client: &{IP:192.168.39.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20501-9125/.minikube/machines/functional-546336/id_rsa Username:docker}
	I0408 22:55:00.470604   21772 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0408 22:55:00.470683   21772 command_runner.go:130] > {"iso_version": "v1.35.0", "kicbase_version": "v0.0.45-1736763277-20236", "minikube_version": "v1.35.0", "commit": "3fb24bd87c8c8761e2515e1a9ee13835a389ed68"}
	I0408 22:55:00.470819   21772 ssh_runner.go:195] Run: systemctl --version
	I0408 22:55:00.499552   21772 command_runner.go:130] > systemd 252 (252)
	I0408 22:55:00.499604   21772 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0408 22:55:00.500041   21772 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0408 22:55:00.827340   21772 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0408 22:55:00.834963   21772 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0408 22:55:00.835008   21772 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0408 22:55:00.835072   21772 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0408 22:55:00.877281   21772 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0408 22:55:00.877304   21772 start.go:495] detecting cgroup driver to use...
	I0408 22:55:00.877378   21772 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0408 22:55:00.940318   21772 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0408 22:55:01.008191   21772 docker.go:217] disabling cri-docker service (if available) ...
	I0408 22:55:01.008253   21772 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0408 22:55:01.030120   21772 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0408 22:55:01.062576   21772 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0408 22:55:01.269983   21772 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0408 22:55:01.496425   21772 docker.go:233] disabling docker service ...
	I0408 22:55:01.496502   21772 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0408 22:55:01.519064   21772 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0408 22:55:01.540326   21772 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0408 22:55:01.741595   21772 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0408 22:55:01.913173   21772 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0408 22:55:01.927297   21772 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0408 22:55:01.950625   21772 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
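	With the runtime endpoint written to /etc/crictl.yaml as above, crictl can reach CRI-O without an explicit --runtime-endpoint flag. A quick sanity check on the guest, sketched with the same paths and profile name used in this log:
	
		minikube -p functional-546336 ssh "sudo cat /etc/crictl.yaml && sudo crictl version"
		# crictl should report RuntimeName: cri-o once /var/run/crio/crio.sock is reachable
	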
	I0408 22:55:01.951000   21772 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0408 22:55:01.951058   21772 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 22:55:01.962726   21772 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0408 22:55:01.962790   21772 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 22:55:01.974651   21772 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 22:55:01.985351   21772 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 22:55:01.996381   21772 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0408 22:55:02.012061   21772 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 22:55:02.024694   21772 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 22:55:02.036195   21772 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 22:55:02.045483   21772 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0408 22:55:02.053886   21772 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0408 22:55:02.053960   21772 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0408 22:55:02.066815   21772 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 22:55:02.213651   21772 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0408 22:56:32.679193   21772 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.465496752s)
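	The sed edits above point CRI-O at the registry.k8s.io/pause:3.10 pause image, switch cgroup_manager to cgroupfs, and set conmon_cgroup to pod before the (roughly 90 s) crio restart. A minimal sketch for confirming those values landed in the drop-in, assuming the file layout and profile name shown in this log:
	
		minikube -p functional-546336 ssh "grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf"
		minikube -p functional-546336 ssh "sudo systemctl is-active crio"
		# the expected values plus an 'active' unit state indicate the restart picked up the new config
	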
	I0408 22:56:32.679231   21772 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0408 22:56:32.679281   21772 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0408 22:56:32.684914   21772 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0408 22:56:32.684956   21772 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0408 22:56:32.684981   21772 command_runner.go:130] > Device: 0,22	Inode: 1501        Links: 1
	I0408 22:56:32.684990   21772 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0408 22:56:32.684996   21772 command_runner.go:130] > Access: 2025-04-08 22:56:32.503580497 +0000
	I0408 22:56:32.685001   21772 command_runner.go:130] > Modify: 2025-04-08 22:56:32.503580497 +0000
	I0408 22:56:32.685010   21772 command_runner.go:130] > Change: 2025-04-08 22:56:32.503580497 +0000
	I0408 22:56:32.685013   21772 command_runner.go:130] >  Birth: -
	I0408 22:56:32.685205   21772 start.go:563] Will wait 60s for crictl version
	I0408 22:56:32.685262   21772 ssh_runner.go:195] Run: which crictl
	I0408 22:56:32.688828   21772 command_runner.go:130] > /usr/bin/crictl
	I0408 22:56:32.688893   21772 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0408 22:56:32.724970   21772 command_runner.go:130] > Version:  0.1.0
	I0408 22:56:32.724989   21772 command_runner.go:130] > RuntimeName:  cri-o
	I0408 22:56:32.724994   21772 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0408 22:56:32.724998   21772 command_runner.go:130] > RuntimeApiVersion:  v1
	I0408 22:56:32.725893   21772 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0408 22:56:32.725977   21772 ssh_runner.go:195] Run: crio --version
	I0408 22:56:32.752723   21772 command_runner.go:130] > crio version 1.29.1
	I0408 22:56:32.752740   21772 command_runner.go:130] > Version:        1.29.1
	I0408 22:56:32.752746   21772 command_runner.go:130] > GitCommit:      unknown
	I0408 22:56:32.752750   21772 command_runner.go:130] > GitCommitDate:  unknown
	I0408 22:56:32.752754   21772 command_runner.go:130] > GitTreeState:   clean
	I0408 22:56:32.752759   21772 command_runner.go:130] > BuildDate:      2025-01-14T08:57:58Z
	I0408 22:56:32.752763   21772 command_runner.go:130] > GoVersion:      go1.21.6
	I0408 22:56:32.752767   21772 command_runner.go:130] > Compiler:       gc
	I0408 22:56:32.752771   21772 command_runner.go:130] > Platform:       linux/amd64
	I0408 22:56:32.752775   21772 command_runner.go:130] > Linkmode:       dynamic
	I0408 22:56:32.752779   21772 command_runner.go:130] > BuildTags:      
	I0408 22:56:32.752783   21772 command_runner.go:130] >   containers_image_ostree_stub
	I0408 22:56:32.752787   21772 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0408 22:56:32.752791   21772 command_runner.go:130] >   btrfs_noversion
	I0408 22:56:32.752795   21772 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0408 22:56:32.752800   21772 command_runner.go:130] >   libdm_no_deferred_remove
	I0408 22:56:32.752804   21772 command_runner.go:130] >   seccomp
	I0408 22:56:32.752810   21772 command_runner.go:130] > LDFlags:          unknown
	I0408 22:56:32.752814   21772 command_runner.go:130] > SeccompEnabled:   true
	I0408 22:56:32.752818   21772 command_runner.go:130] > AppArmorEnabled:  false
	I0408 22:56:32.753859   21772 ssh_runner.go:195] Run: crio --version
	I0408 22:56:32.778913   21772 command_runner.go:130] > crio version 1.29.1
	I0408 22:56:32.778948   21772 command_runner.go:130] > Version:        1.29.1
	I0408 22:56:32.778957   21772 command_runner.go:130] > GitCommit:      unknown
	I0408 22:56:32.778962   21772 command_runner.go:130] > GitCommitDate:  unknown
	I0408 22:56:32.778967   21772 command_runner.go:130] > GitTreeState:   clean
	I0408 22:56:32.778975   21772 command_runner.go:130] > BuildDate:      2025-01-14T08:57:58Z
	I0408 22:56:32.778980   21772 command_runner.go:130] > GoVersion:      go1.21.6
	I0408 22:56:32.778986   21772 command_runner.go:130] > Compiler:       gc
	I0408 22:56:32.778993   21772 command_runner.go:130] > Platform:       linux/amd64
	I0408 22:56:32.779002   21772 command_runner.go:130] > Linkmode:       dynamic
	I0408 22:56:32.779012   21772 command_runner.go:130] > BuildTags:      
	I0408 22:56:32.779020   21772 command_runner.go:130] >   containers_image_ostree_stub
	I0408 22:56:32.779030   21772 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0408 22:56:32.779037   21772 command_runner.go:130] >   btrfs_noversion
	I0408 22:56:32.779048   21772 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0408 22:56:32.779056   21772 command_runner.go:130] >   libdm_no_deferred_remove
	I0408 22:56:32.779064   21772 command_runner.go:130] >   seccomp
	I0408 22:56:32.779072   21772 command_runner.go:130] > LDFlags:          unknown
	I0408 22:56:32.779080   21772 command_runner.go:130] > SeccompEnabled:   true
	I0408 22:56:32.779090   21772 command_runner.go:130] > AppArmorEnabled:  false
	I0408 22:56:32.780946   21772 out.go:177] * Preparing Kubernetes v1.32.2 on CRI-O 1.29.1 ...
	I0408 22:56:32.782109   21772 main.go:141] libmachine: (functional-546336) Calling .GetIP
	I0408 22:56:32.785040   21772 main.go:141] libmachine: (functional-546336) DBG | domain functional-546336 has defined MAC address 52:54:00:8a:a0:b8 in network mk-functional-546336
	I0408 22:56:32.785454   21772 main.go:141] libmachine: (functional-546336) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:a0:b8", ip: ""} in network mk-functional-546336: {Iface:virbr1 ExpiryTime:2025-04-08 23:53:47 +0000 UTC Type:0 Mac:52:54:00:8a:a0:b8 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:functional-546336 Clientid:01:52:54:00:8a:a0:b8}
	I0408 22:56:32.785486   21772 main.go:141] libmachine: (functional-546336) DBG | domain functional-546336 has defined IP address 192.168.39.234 and MAC address 52:54:00:8a:a0:b8 in network mk-functional-546336
	I0408 22:56:32.785755   21772 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0408 22:56:32.789792   21772 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0408 22:56:32.790053   21772 kubeadm.go:883] updating cluster {Name:functional-546336 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:functional-546336 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.234 Port:8441 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0408 22:56:32.790145   21772 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0408 22:56:32.790182   21772 ssh_runner.go:195] Run: sudo crictl images --output json
	I0408 22:56:32.827503   21772 command_runner.go:130] > {
	I0408 22:56:32.827524   21772 command_runner.go:130] >   "images": [
	I0408 22:56:32.827528   21772 command_runner.go:130] >     {
	I0408 22:56:32.827537   21772 command_runner.go:130] >       "id": "d300845f67aebd4f27f549889087215f476cecdd6d9a715b49a4152857549c56",
	I0408 22:56:32.827541   21772 command_runner.go:130] >       "repoTags": [
	I0408 22:56:32.827547   21772 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20241212-9f82dd49"
	I0408 22:56:32.827550   21772 command_runner.go:130] >       ],
	I0408 22:56:32.827554   21772 command_runner.go:130] >       "repoDigests": [
	I0408 22:56:32.827561   21772 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:56ea59f77258052c4506076525318ffa66817500f68e94a50fdf7d600a280d26",
	I0408 22:56:32.827568   21772 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a3c74735c5fc7cab683a2f94dddec913052aacaa8d8b773c88d428e8dee3dd40"
	I0408 22:56:32.827572   21772 command_runner.go:130] >       ],
	I0408 22:56:32.827576   21772 command_runner.go:130] >       "size": "95714353",
	I0408 22:56:32.827579   21772 command_runner.go:130] >       "uid": null,
	I0408 22:56:32.827583   21772 command_runner.go:130] >       "username": "",
	I0408 22:56:32.827593   21772 command_runner.go:130] >       "spec": null,
	I0408 22:56:32.827600   21772 command_runner.go:130] >       "pinned": false
	I0408 22:56:32.827603   21772 command_runner.go:130] >     },
	I0408 22:56:32.827606   21772 command_runner.go:130] >     {
	I0408 22:56:32.827611   21772 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0408 22:56:32.827614   21772 command_runner.go:130] >       "repoTags": [
	I0408 22:56:32.827620   21772 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0408 22:56:32.827624   21772 command_runner.go:130] >       ],
	I0408 22:56:32.827627   21772 command_runner.go:130] >       "repoDigests": [
	I0408 22:56:32.827635   21772 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0408 22:56:32.827645   21772 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0408 22:56:32.827649   21772 command_runner.go:130] >       ],
	I0408 22:56:32.827657   21772 command_runner.go:130] >       "size": "31470524",
	I0408 22:56:32.827663   21772 command_runner.go:130] >       "uid": null,
	I0408 22:56:32.827667   21772 command_runner.go:130] >       "username": "",
	I0408 22:56:32.827670   21772 command_runner.go:130] >       "spec": null,
	I0408 22:56:32.827674   21772 command_runner.go:130] >       "pinned": false
	I0408 22:56:32.827677   21772 command_runner.go:130] >     },
	I0408 22:56:32.827681   21772 command_runner.go:130] >     {
	I0408 22:56:32.827689   21772 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I0408 22:56:32.827692   21772 command_runner.go:130] >       "repoTags": [
	I0408 22:56:32.827697   21772 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I0408 22:56:32.827703   21772 command_runner.go:130] >       ],
	I0408 22:56:32.827706   21772 command_runner.go:130] >       "repoDigests": [
	I0408 22:56:32.827713   21772 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I0408 22:56:32.827720   21772 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I0408 22:56:32.827724   21772 command_runner.go:130] >       ],
	I0408 22:56:32.827727   21772 command_runner.go:130] >       "size": "63273227",
	I0408 22:56:32.827731   21772 command_runner.go:130] >       "uid": null,
	I0408 22:56:32.827737   21772 command_runner.go:130] >       "username": "nonroot",
	I0408 22:56:32.827740   21772 command_runner.go:130] >       "spec": null,
	I0408 22:56:32.827754   21772 command_runner.go:130] >       "pinned": false
	I0408 22:56:32.827759   21772 command_runner.go:130] >     },
	I0408 22:56:32.827766   21772 command_runner.go:130] >     {
	I0408 22:56:32.827773   21772 command_runner.go:130] >       "id": "a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc",
	I0408 22:56:32.827777   21772 command_runner.go:130] >       "repoTags": [
	I0408 22:56:32.827782   21772 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.16-0"
	I0408 22:56:32.827785   21772 command_runner.go:130] >       ],
	I0408 22:56:32.827791   21772 command_runner.go:130] >       "repoDigests": [
	I0408 22:56:32.827798   21772 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:1d988b04a9476119cdbc2025ba58f6eec19554caf36edb43c357ff412d07e990",
	I0408 22:56:32.827811   21772 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5"
	I0408 22:56:32.827816   21772 command_runner.go:130] >       ],
	I0408 22:56:32.827820   21772 command_runner.go:130] >       "size": "151021823",
	I0408 22:56:32.827824   21772 command_runner.go:130] >       "uid": {
	I0408 22:56:32.827830   21772 command_runner.go:130] >         "value": "0"
	I0408 22:56:32.827833   21772 command_runner.go:130] >       },
	I0408 22:56:32.827837   21772 command_runner.go:130] >       "username": "",
	I0408 22:56:32.827840   21772 command_runner.go:130] >       "spec": null,
	I0408 22:56:32.827844   21772 command_runner.go:130] >       "pinned": false
	I0408 22:56:32.827846   21772 command_runner.go:130] >     },
	I0408 22:56:32.827850   21772 command_runner.go:130] >     {
	I0408 22:56:32.827858   21772 command_runner.go:130] >       "id": "85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef",
	I0408 22:56:32.827874   21772 command_runner.go:130] >       "repoTags": [
	I0408 22:56:32.827882   21772 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.32.2"
	I0408 22:56:32.827890   21772 command_runner.go:130] >       ],
	I0408 22:56:32.827896   21772 command_runner.go:130] >       "repoDigests": [
	I0408 22:56:32.827908   21772 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:48e677803a23233a10a796f3d7edc73223e3fbaceb6113665c1015464a743e9d",
	I0408 22:56:32.827916   21772 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:c47449f3e751588ea0cb74e325e0f83db335a415f4f4c7fb147375dd6c84757f"
	I0408 22:56:32.827922   21772 command_runner.go:130] >       ],
	I0408 22:56:32.827925   21772 command_runner.go:130] >       "size": "98055648",
	I0408 22:56:32.827929   21772 command_runner.go:130] >       "uid": {
	I0408 22:56:32.827932   21772 command_runner.go:130] >         "value": "0"
	I0408 22:56:32.827936   21772 command_runner.go:130] >       },
	I0408 22:56:32.827949   21772 command_runner.go:130] >       "username": "",
	I0408 22:56:32.827954   21772 command_runner.go:130] >       "spec": null,
	I0408 22:56:32.827958   21772 command_runner.go:130] >       "pinned": false
	I0408 22:56:32.827966   21772 command_runner.go:130] >     },
	I0408 22:56:32.827970   21772 command_runner.go:130] >     {
	I0408 22:56:32.827976   21772 command_runner.go:130] >       "id": "b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389",
	I0408 22:56:32.827982   21772 command_runner.go:130] >       "repoTags": [
	I0408 22:56:32.827987   21772 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.32.2"
	I0408 22:56:32.827993   21772 command_runner.go:130] >       ],
	I0408 22:56:32.827996   21772 command_runner.go:130] >       "repoDigests": [
	I0408 22:56:32.828003   21772 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:01669d976f198e210414e4864454330f6cbd4e5fedf1570b0340d206442f2ae5",
	I0408 22:56:32.828013   21772 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:399aa50f4d1361c59dc458e634506d02de32613d03a9a614a21058741162ef90"
	I0408 22:56:32.828019   21772 command_runner.go:130] >       ],
	I0408 22:56:32.828022   21772 command_runner.go:130] >       "size": "90793286",
	I0408 22:56:32.828026   21772 command_runner.go:130] >       "uid": {
	I0408 22:56:32.828029   21772 command_runner.go:130] >         "value": "0"
	I0408 22:56:32.828033   21772 command_runner.go:130] >       },
	I0408 22:56:32.828036   21772 command_runner.go:130] >       "username": "",
	I0408 22:56:32.828043   21772 command_runner.go:130] >       "spec": null,
	I0408 22:56:32.828046   21772 command_runner.go:130] >       "pinned": false
	I0408 22:56:32.828049   21772 command_runner.go:130] >     },
	I0408 22:56:32.828052   21772 command_runner.go:130] >     {
	I0408 22:56:32.828058   21772 command_runner.go:130] >       "id": "f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5",
	I0408 22:56:32.828064   21772 command_runner.go:130] >       "repoTags": [
	I0408 22:56:32.828069   21772 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.32.2"
	I0408 22:56:32.828074   21772 command_runner.go:130] >       ],
	I0408 22:56:32.828078   21772 command_runner.go:130] >       "repoDigests": [
	I0408 22:56:32.828085   21772 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:83c025f0faa6799fab6645102a98138e39a9a7db2be3bc792c79d72659b1805d",
	I0408 22:56:32.828094   21772 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:ab90de2ec2cbade95df799a63d85e438f51817055ecee067b694fdd0f776e15d"
	I0408 22:56:32.828097   21772 command_runner.go:130] >       ],
	I0408 22:56:32.828102   21772 command_runner.go:130] >       "size": "95271321",
	I0408 22:56:32.828108   21772 command_runner.go:130] >       "uid": null,
	I0408 22:56:32.828111   21772 command_runner.go:130] >       "username": "",
	I0408 22:56:32.828115   21772 command_runner.go:130] >       "spec": null,
	I0408 22:56:32.828119   21772 command_runner.go:130] >       "pinned": false
	I0408 22:56:32.828124   21772 command_runner.go:130] >     },
	I0408 22:56:32.828131   21772 command_runner.go:130] >     {
	I0408 22:56:32.828140   21772 command_runner.go:130] >       "id": "d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d",
	I0408 22:56:32.828144   21772 command_runner.go:130] >       "repoTags": [
	I0408 22:56:32.828150   21772 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.32.2"
	I0408 22:56:32.828159   21772 command_runner.go:130] >       ],
	I0408 22:56:32.828165   21772 command_runner.go:130] >       "repoDigests": [
	I0408 22:56:32.828207   21772 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:45710d74cfd5aa10a001d0cf81747b77c28617444ffee0503d12f1dcd7450f76",
	I0408 22:56:32.828220   21772 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:c98f93221ffa10bfb46b85966915759dbcaf957098364763242e814fee84363b"
	I0408 22:56:32.828223   21772 command_runner.go:130] >       ],
	I0408 22:56:32.828227   21772 command_runner.go:130] >       "size": "70653254",
	I0408 22:56:32.828230   21772 command_runner.go:130] >       "uid": {
	I0408 22:56:32.828233   21772 command_runner.go:130] >         "value": "0"
	I0408 22:56:32.828236   21772 command_runner.go:130] >       },
	I0408 22:56:32.828239   21772 command_runner.go:130] >       "username": "",
	I0408 22:56:32.828243   21772 command_runner.go:130] >       "spec": null,
	I0408 22:56:32.828247   21772 command_runner.go:130] >       "pinned": false
	I0408 22:56:32.828250   21772 command_runner.go:130] >     },
	I0408 22:56:32.828253   21772 command_runner.go:130] >     {
	I0408 22:56:32.828259   21772 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0408 22:56:32.828265   21772 command_runner.go:130] >       "repoTags": [
	I0408 22:56:32.828269   21772 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0408 22:56:32.828272   21772 command_runner.go:130] >       ],
	I0408 22:56:32.828276   21772 command_runner.go:130] >       "repoDigests": [
	I0408 22:56:32.828283   21772 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0408 22:56:32.828292   21772 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0408 22:56:32.828295   21772 command_runner.go:130] >       ],
	I0408 22:56:32.828298   21772 command_runner.go:130] >       "size": "742080",
	I0408 22:56:32.828302   21772 command_runner.go:130] >       "uid": {
	I0408 22:56:32.828305   21772 command_runner.go:130] >         "value": "65535"
	I0408 22:56:32.828308   21772 command_runner.go:130] >       },
	I0408 22:56:32.828312   21772 command_runner.go:130] >       "username": "",
	I0408 22:56:32.828318   21772 command_runner.go:130] >       "spec": null,
	I0408 22:56:32.828324   21772 command_runner.go:130] >       "pinned": true
	I0408 22:56:32.828331   21772 command_runner.go:130] >     }
	I0408 22:56:32.828334   21772 command_runner.go:130] >   ]
	I0408 22:56:32.828337   21772 command_runner.go:130] > }
	I0408 22:56:32.829120   21772 crio.go:514] all images are preloaded for cri-o runtime.
	I0408 22:56:32.829135   21772 crio.go:433] Images already preloaded, skipping extraction
	I0408 22:56:32.829174   21772 ssh_runner.go:195] Run: sudo crictl images --output json
	I0408 22:56:32.860598   21772 command_runner.go:130] > {
	I0408 22:56:32.860616   21772 command_runner.go:130] >   "images": [
	I0408 22:56:32.860620   21772 command_runner.go:130] >     {
	I0408 22:56:32.860628   21772 command_runner.go:130] >       "id": "d300845f67aebd4f27f549889087215f476cecdd6d9a715b49a4152857549c56",
	I0408 22:56:32.860632   21772 command_runner.go:130] >       "repoTags": [
	I0408 22:56:32.860637   21772 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20241212-9f82dd49"
	I0408 22:56:32.860641   21772 command_runner.go:130] >       ],
	I0408 22:56:32.860645   21772 command_runner.go:130] >       "repoDigests": [
	I0408 22:56:32.860658   21772 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:56ea59f77258052c4506076525318ffa66817500f68e94a50fdf7d600a280d26",
	I0408 22:56:32.860666   21772 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a3c74735c5fc7cab683a2f94dddec913052aacaa8d8b773c88d428e8dee3dd40"
	I0408 22:56:32.860669   21772 command_runner.go:130] >       ],
	I0408 22:56:32.860674   21772 command_runner.go:130] >       "size": "95714353",
	I0408 22:56:32.860677   21772 command_runner.go:130] >       "uid": null,
	I0408 22:56:32.860682   21772 command_runner.go:130] >       "username": "",
	I0408 22:56:32.860690   21772 command_runner.go:130] >       "spec": null,
	I0408 22:56:32.860694   21772 command_runner.go:130] >       "pinned": false
	I0408 22:56:32.860699   21772 command_runner.go:130] >     },
	I0408 22:56:32.860702   21772 command_runner.go:130] >     {
	I0408 22:56:32.860708   21772 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0408 22:56:32.860712   21772 command_runner.go:130] >       "repoTags": [
	I0408 22:56:32.860719   21772 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0408 22:56:32.860722   21772 command_runner.go:130] >       ],
	I0408 22:56:32.860727   21772 command_runner.go:130] >       "repoDigests": [
	I0408 22:56:32.860734   21772 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0408 22:56:32.860742   21772 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0408 22:56:32.860746   21772 command_runner.go:130] >       ],
	I0408 22:56:32.860752   21772 command_runner.go:130] >       "size": "31470524",
	I0408 22:56:32.860757   21772 command_runner.go:130] >       "uid": null,
	I0408 22:56:32.860761   21772 command_runner.go:130] >       "username": "",
	I0408 22:56:32.860764   21772 command_runner.go:130] >       "spec": null,
	I0408 22:56:32.860768   21772 command_runner.go:130] >       "pinned": false
	I0408 22:56:32.860771   21772 command_runner.go:130] >     },
	I0408 22:56:32.860774   21772 command_runner.go:130] >     {
	I0408 22:56:32.860780   21772 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I0408 22:56:32.860784   21772 command_runner.go:130] >       "repoTags": [
	I0408 22:56:32.860789   21772 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I0408 22:56:32.860793   21772 command_runner.go:130] >       ],
	I0408 22:56:32.860797   21772 command_runner.go:130] >       "repoDigests": [
	I0408 22:56:32.860805   21772 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I0408 22:56:32.860814   21772 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I0408 22:56:32.860818   21772 command_runner.go:130] >       ],
	I0408 22:56:32.860828   21772 command_runner.go:130] >       "size": "63273227",
	I0408 22:56:32.860834   21772 command_runner.go:130] >       "uid": null,
	I0408 22:56:32.860838   21772 command_runner.go:130] >       "username": "nonroot",
	I0408 22:56:32.860842   21772 command_runner.go:130] >       "spec": null,
	I0408 22:56:32.860848   21772 command_runner.go:130] >       "pinned": false
	I0408 22:56:32.860851   21772 command_runner.go:130] >     },
	I0408 22:56:32.860854   21772 command_runner.go:130] >     {
	I0408 22:56:32.860860   21772 command_runner.go:130] >       "id": "a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc",
	I0408 22:56:32.860866   21772 command_runner.go:130] >       "repoTags": [
	I0408 22:56:32.860871   21772 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.16-0"
	I0408 22:56:32.860878   21772 command_runner.go:130] >       ],
	I0408 22:56:32.860882   21772 command_runner.go:130] >       "repoDigests": [
	I0408 22:56:32.860891   21772 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:1d988b04a9476119cdbc2025ba58f6eec19554caf36edb43c357ff412d07e990",
	I0408 22:56:32.860905   21772 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5"
	I0408 22:56:32.860911   21772 command_runner.go:130] >       ],
	I0408 22:56:32.860915   21772 command_runner.go:130] >       "size": "151021823",
	I0408 22:56:32.860921   21772 command_runner.go:130] >       "uid": {
	I0408 22:56:32.860925   21772 command_runner.go:130] >         "value": "0"
	I0408 22:56:32.860931   21772 command_runner.go:130] >       },
	I0408 22:56:32.860946   21772 command_runner.go:130] >       "username": "",
	I0408 22:56:32.860953   21772 command_runner.go:130] >       "spec": null,
	I0408 22:56:32.860957   21772 command_runner.go:130] >       "pinned": false
	I0408 22:56:32.860962   21772 command_runner.go:130] >     },
	I0408 22:56:32.860965   21772 command_runner.go:130] >     {
	I0408 22:56:32.860971   21772 command_runner.go:130] >       "id": "85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef",
	I0408 22:56:32.860977   21772 command_runner.go:130] >       "repoTags": [
	I0408 22:56:32.860982   21772 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.32.2"
	I0408 22:56:32.860985   21772 command_runner.go:130] >       ],
	I0408 22:56:32.860990   21772 command_runner.go:130] >       "repoDigests": [
	I0408 22:56:32.860997   21772 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:48e677803a23233a10a796f3d7edc73223e3fbaceb6113665c1015464a743e9d",
	I0408 22:56:32.861007   21772 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:c47449f3e751588ea0cb74e325e0f83db335a415f4f4c7fb147375dd6c84757f"
	I0408 22:56:32.861010   21772 command_runner.go:130] >       ],
	I0408 22:56:32.861014   21772 command_runner.go:130] >       "size": "98055648",
	I0408 22:56:32.861024   21772 command_runner.go:130] >       "uid": {
	I0408 22:56:32.861030   21772 command_runner.go:130] >         "value": "0"
	I0408 22:56:32.861033   21772 command_runner.go:130] >       },
	I0408 22:56:32.861037   21772 command_runner.go:130] >       "username": "",
	I0408 22:56:32.861043   21772 command_runner.go:130] >       "spec": null,
	I0408 22:56:32.861046   21772 command_runner.go:130] >       "pinned": false
	I0408 22:56:32.861049   21772 command_runner.go:130] >     },
	I0408 22:56:32.861052   21772 command_runner.go:130] >     {
	I0408 22:56:32.861060   21772 command_runner.go:130] >       "id": "b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389",
	I0408 22:56:32.861064   21772 command_runner.go:130] >       "repoTags": [
	I0408 22:56:32.861071   21772 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.32.2"
	I0408 22:56:32.861076   21772 command_runner.go:130] >       ],
	I0408 22:56:32.861082   21772 command_runner.go:130] >       "repoDigests": [
	I0408 22:56:32.861090   21772 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:01669d976f198e210414e4864454330f6cbd4e5fedf1570b0340d206442f2ae5",
	I0408 22:56:32.861099   21772 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:399aa50f4d1361c59dc458e634506d02de32613d03a9a614a21058741162ef90"
	I0408 22:56:32.861103   21772 command_runner.go:130] >       ],
	I0408 22:56:32.861106   21772 command_runner.go:130] >       "size": "90793286",
	I0408 22:56:32.861110   21772 command_runner.go:130] >       "uid": {
	I0408 22:56:32.861114   21772 command_runner.go:130] >         "value": "0"
	I0408 22:56:32.861116   21772 command_runner.go:130] >       },
	I0408 22:56:32.861120   21772 command_runner.go:130] >       "username": "",
	I0408 22:56:32.861126   21772 command_runner.go:130] >       "spec": null,
	I0408 22:56:32.861130   21772 command_runner.go:130] >       "pinned": false
	I0408 22:56:32.861133   21772 command_runner.go:130] >     },
	I0408 22:56:32.861136   21772 command_runner.go:130] >     {
	I0408 22:56:32.861143   21772 command_runner.go:130] >       "id": "f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5",
	I0408 22:56:32.861149   21772 command_runner.go:130] >       "repoTags": [
	I0408 22:56:32.861153   21772 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.32.2"
	I0408 22:56:32.861158   21772 command_runner.go:130] >       ],
	I0408 22:56:32.861162   21772 command_runner.go:130] >       "repoDigests": [
	I0408 22:56:32.861169   21772 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:83c025f0faa6799fab6645102a98138e39a9a7db2be3bc792c79d72659b1805d",
	I0408 22:56:32.861178   21772 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:ab90de2ec2cbade95df799a63d85e438f51817055ecee067b694fdd0f776e15d"
	I0408 22:56:32.861182   21772 command_runner.go:130] >       ],
	I0408 22:56:32.861190   21772 command_runner.go:130] >       "size": "95271321",
	I0408 22:56:32.861196   21772 command_runner.go:130] >       "uid": null,
	I0408 22:56:32.861200   21772 command_runner.go:130] >       "username": "",
	I0408 22:56:32.861204   21772 command_runner.go:130] >       "spec": null,
	I0408 22:56:32.861207   21772 command_runner.go:130] >       "pinned": false
	I0408 22:56:32.861210   21772 command_runner.go:130] >     },
	I0408 22:56:32.861213   21772 command_runner.go:130] >     {
	I0408 22:56:32.861219   21772 command_runner.go:130] >       "id": "d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d",
	I0408 22:56:32.861224   21772 command_runner.go:130] >       "repoTags": [
	I0408 22:56:32.861229   21772 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.32.2"
	I0408 22:56:32.861234   21772 command_runner.go:130] >       ],
	I0408 22:56:32.861238   21772 command_runner.go:130] >       "repoDigests": [
	I0408 22:56:32.861256   21772 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:45710d74cfd5aa10a001d0cf81747b77c28617444ffee0503d12f1dcd7450f76",
	I0408 22:56:32.861266   21772 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:c98f93221ffa10bfb46b85966915759dbcaf957098364763242e814fee84363b"
	I0408 22:56:32.861269   21772 command_runner.go:130] >       ],
	I0408 22:56:32.861273   21772 command_runner.go:130] >       "size": "70653254",
	I0408 22:56:32.861275   21772 command_runner.go:130] >       "uid": {
	I0408 22:56:32.861279   21772 command_runner.go:130] >         "value": "0"
	I0408 22:56:32.861282   21772 command_runner.go:130] >       },
	I0408 22:56:32.861286   21772 command_runner.go:130] >       "username": "",
	I0408 22:56:32.861289   21772 command_runner.go:130] >       "spec": null,
	I0408 22:56:32.861293   21772 command_runner.go:130] >       "pinned": false
	I0408 22:56:32.861296   21772 command_runner.go:130] >     },
	I0408 22:56:32.861299   21772 command_runner.go:130] >     {
	I0408 22:56:32.861305   21772 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0408 22:56:32.861314   21772 command_runner.go:130] >       "repoTags": [
	I0408 22:56:32.861319   21772 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0408 22:56:32.861322   21772 command_runner.go:130] >       ],
	I0408 22:56:32.861325   21772 command_runner.go:130] >       "repoDigests": [
	I0408 22:56:32.861332   21772 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0408 22:56:32.861341   21772 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0408 22:56:32.861345   21772 command_runner.go:130] >       ],
	I0408 22:56:32.861349   21772 command_runner.go:130] >       "size": "742080",
	I0408 22:56:32.861357   21772 command_runner.go:130] >       "uid": {
	I0408 22:56:32.861364   21772 command_runner.go:130] >         "value": "65535"
	I0408 22:56:32.861367   21772 command_runner.go:130] >       },
	I0408 22:56:32.861370   21772 command_runner.go:130] >       "username": "",
	I0408 22:56:32.861374   21772 command_runner.go:130] >       "spec": null,
	I0408 22:56:32.861380   21772 command_runner.go:130] >       "pinned": true
	I0408 22:56:32.861382   21772 command_runner.go:130] >     }
	I0408 22:56:32.861385   21772 command_runner.go:130] >   ]
	I0408 22:56:32.861388   21772 command_runner.go:130] > }
	I0408 22:56:32.862015   21772 crio.go:514] all images are preloaded for cri-o runtime.
	I0408 22:56:32.862029   21772 cache_images.go:84] Images are preloaded, skipping loading
	I0408 22:56:32.862035   21772 kubeadm.go:934] updating node { 192.168.39.234 8441 v1.32.2 crio true true} ...
	I0408 22:56:32.862119   21772 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-546336 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.234
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:functional-546336 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0408 22:56:32.862176   21772 ssh_runner.go:195] Run: crio config
	I0408 22:56:32.900028   21772 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0408 22:56:32.900049   21772 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0408 22:56:32.900055   21772 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0408 22:56:32.900058   21772 command_runner.go:130] > #
	I0408 22:56:32.900065   21772 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0408 22:56:32.900071   21772 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0408 22:56:32.900077   21772 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0408 22:56:32.900097   21772 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0408 22:56:32.900101   21772 command_runner.go:130] > # reload'.
	I0408 22:56:32.900107   21772 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0408 22:56:32.900113   21772 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0408 22:56:32.900120   21772 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0408 22:56:32.900130   21772 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0408 22:56:32.900135   21772 command_runner.go:130] > [crio]
	I0408 22:56:32.900144   21772 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0408 22:56:32.900152   21772 command_runner.go:130] > # containers images, in this directory.
	I0408 22:56:32.900158   21772 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0408 22:56:32.900171   21772 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0408 22:56:32.900182   21772 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0408 22:56:32.900190   21772 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0408 22:56:32.900199   21772 command_runner.go:130] > # imagestore = ""
	I0408 22:56:32.900205   21772 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0408 22:56:32.900213   21772 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0408 22:56:32.900221   21772 command_runner.go:130] > storage_driver = "overlay"
	I0408 22:56:32.900232   21772 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0408 22:56:32.900240   21772 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0408 22:56:32.900247   21772 command_runner.go:130] > storage_option = [
	I0408 22:56:32.900262   21772 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0408 22:56:32.900275   21772 command_runner.go:130] > ]
	I0408 22:56:32.900286   21772 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0408 22:56:32.900296   21772 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0408 22:56:32.900301   21772 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0408 22:56:32.900307   21772 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0408 22:56:32.900312   21772 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0408 22:56:32.900316   21772 command_runner.go:130] > # always happen on a node reboot
	I0408 22:56:32.900323   21772 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0408 22:56:32.900351   21772 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0408 22:56:32.900362   21772 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0408 22:56:32.900370   21772 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0408 22:56:32.900379   21772 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0408 22:56:32.900389   21772 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0408 22:56:32.900401   21772 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0408 22:56:32.900408   21772 command_runner.go:130] > # internal_wipe = true
	I0408 22:56:32.900421   21772 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0408 22:56:32.900433   21772 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0408 22:56:32.900445   21772 command_runner.go:130] > # internal_repair = false
	I0408 22:56:32.900456   21772 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0408 22:56:32.900465   21772 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0408 22:56:32.900477   21772 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0408 22:56:32.900488   21772 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0408 22:56:32.900500   21772 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0408 22:56:32.900506   21772 command_runner.go:130] > [crio.api]
	I0408 22:56:32.900514   21772 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0408 22:56:32.900524   21772 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0408 22:56:32.900532   21772 command_runner.go:130] > # IP address on which the stream server will listen.
	I0408 22:56:32.900539   21772 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0408 22:56:32.900549   21772 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0408 22:56:32.900559   21772 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0408 22:56:32.900565   21772 command_runner.go:130] > # stream_port = "0"
	I0408 22:56:32.900572   21772 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0408 22:56:32.900581   21772 command_runner.go:130] > # stream_enable_tls = false
	I0408 22:56:32.900589   21772 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0408 22:56:32.900593   21772 command_runner.go:130] > # stream_idle_timeout = ""
	I0408 22:56:32.900601   21772 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0408 22:56:32.900607   21772 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0408 22:56:32.900614   21772 command_runner.go:130] > # minutes.
	I0408 22:56:32.900620   21772 command_runner.go:130] > # stream_tls_cert = ""
	I0408 22:56:32.900631   21772 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0408 22:56:32.900649   21772 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0408 22:56:32.900658   21772 command_runner.go:130] > # stream_tls_key = ""
	I0408 22:56:32.900667   21772 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0408 22:56:32.900679   21772 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0408 22:56:32.900709   21772 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0408 22:56:32.900720   21772 command_runner.go:130] > # stream_tls_ca = ""
	I0408 22:56:32.900732   21772 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0408 22:56:32.900742   21772 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0408 22:56:32.900753   21772 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0408 22:56:32.900763   21772 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0408 22:56:32.900773   21772 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0408 22:56:32.900785   21772 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0408 22:56:32.900793   21772 command_runner.go:130] > [crio.runtime]
	I0408 22:56:32.900803   21772 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0408 22:56:32.900815   21772 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0408 22:56:32.900822   21772 command_runner.go:130] > # "nofile=1024:2048"
	I0408 22:56:32.900832   21772 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0408 22:56:32.900841   21772 command_runner.go:130] > # default_ulimits = [
	I0408 22:56:32.900847   21772 command_runner.go:130] > # ]
	I0408 22:56:32.900860   21772 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0408 22:56:32.900873   21772 command_runner.go:130] > # no_pivot = false
	I0408 22:56:32.900885   21772 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0408 22:56:32.900897   21772 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0408 22:56:32.900907   21772 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0408 22:56:32.900918   21772 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0408 22:56:32.900932   21772 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0408 22:56:32.900959   21772 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0408 22:56:32.900970   21772 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0408 22:56:32.900976   21772 command_runner.go:130] > # Cgroup setting for conmon
	I0408 22:56:32.900987   21772 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0408 22:56:32.900996   21772 command_runner.go:130] > conmon_cgroup = "pod"
	I0408 22:56:32.901006   21772 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0408 22:56:32.901017   21772 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0408 22:56:32.901030   21772 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0408 22:56:32.901038   21772 command_runner.go:130] > conmon_env = [
	I0408 22:56:32.901047   21772 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0408 22:56:32.901055   21772 command_runner.go:130] > ]
	I0408 22:56:32.901064   21772 command_runner.go:130] > # Additional environment variables to set for all the
	I0408 22:56:32.901075   21772 command_runner.go:130] > # containers. These are overridden if set in the
	I0408 22:56:32.901087   21772 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0408 22:56:32.901094   21772 command_runner.go:130] > # default_env = [
	I0408 22:56:32.901103   21772 command_runner.go:130] > # ]
	I0408 22:56:32.901111   21772 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0408 22:56:32.901125   21772 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I0408 22:56:32.901134   21772 command_runner.go:130] > # selinux = false
	I0408 22:56:32.901143   21772 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0408 22:56:32.901155   21772 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0408 22:56:32.901167   21772 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0408 22:56:32.901177   21772 command_runner.go:130] > # seccomp_profile = ""
	I0408 22:56:32.901186   21772 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0408 22:56:32.901197   21772 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0408 22:56:32.901207   21772 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0408 22:56:32.901217   21772 command_runner.go:130] > # which might increase security.
	I0408 22:56:32.901225   21772 command_runner.go:130] > # This option is currently deprecated,
	I0408 22:56:32.901237   21772 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0408 22:56:32.901255   21772 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0408 22:56:32.901268   21772 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0408 22:56:32.901288   21772 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0408 22:56:32.901314   21772 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0408 22:56:32.901327   21772 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0408 22:56:32.901335   21772 command_runner.go:130] > # This option supports live configuration reload.
	I0408 22:56:32.901345   21772 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0408 22:56:32.901353   21772 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0408 22:56:32.901362   21772 command_runner.go:130] > # the cgroup blockio controller.
	I0408 22:56:32.901369   21772 command_runner.go:130] > # blockio_config_file = ""
	I0408 22:56:32.901382   21772 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0408 22:56:32.901388   21772 command_runner.go:130] > # blockio parameters.
	I0408 22:56:32.901397   21772 command_runner.go:130] > # blockio_reload = false
	I0408 22:56:32.901407   21772 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0408 22:56:32.901414   21772 command_runner.go:130] > # irqbalance daemon.
	I0408 22:56:32.901419   21772 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0408 22:56:32.901425   21772 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0408 22:56:32.901431   21772 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0408 22:56:32.901438   21772 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0408 22:56:32.901446   21772 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0408 22:56:32.901454   21772 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0408 22:56:32.901461   21772 command_runner.go:130] > # This option supports live configuration reload.
	I0408 22:56:32.901468   21772 command_runner.go:130] > # rdt_config_file = ""
	I0408 22:56:32.901476   21772 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0408 22:56:32.901483   21772 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0408 22:56:32.901522   21772 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0408 22:56:32.901531   21772 command_runner.go:130] > # separate_pull_cgroup = ""
	I0408 22:56:32.901538   21772 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0408 22:56:32.901549   21772 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0408 22:56:32.901555   21772 command_runner.go:130] > # will be added.
	I0408 22:56:32.901562   21772 command_runner.go:130] > # default_capabilities = [
	I0408 22:56:32.901571   21772 command_runner.go:130] > # 	"CHOWN",
	I0408 22:56:32.901577   21772 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0408 22:56:32.901585   21772 command_runner.go:130] > # 	"FSETID",
	I0408 22:56:32.901590   21772 command_runner.go:130] > # 	"FOWNER",
	I0408 22:56:32.901596   21772 command_runner.go:130] > # 	"SETGID",
	I0408 22:56:32.901609   21772 command_runner.go:130] > # 	"SETUID",
	I0408 22:56:32.901618   21772 command_runner.go:130] > # 	"SETPCAP",
	I0408 22:56:32.901622   21772 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0408 22:56:32.901628   21772 command_runner.go:130] > # 	"KILL",
	I0408 22:56:32.901632   21772 command_runner.go:130] > # ]
	I0408 22:56:32.901643   21772 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0408 22:56:32.901657   21772 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0408 22:56:32.901671   21772 command_runner.go:130] > # add_inheritable_capabilities = false
	I0408 22:56:32.901681   21772 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0408 22:56:32.901693   21772 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0408 22:56:32.901702   21772 command_runner.go:130] > default_sysctls = [
	I0408 22:56:32.901710   21772 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0408 22:56:32.901718   21772 command_runner.go:130] > ]
	I0408 22:56:32.901725   21772 command_runner.go:130] > # List of devices on the host that a
	I0408 22:56:32.901738   21772 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0408 22:56:32.901744   21772 command_runner.go:130] > # allowed_devices = [
	I0408 22:56:32.901753   21772 command_runner.go:130] > # 	"/dev/fuse",
	I0408 22:56:32.901759   21772 command_runner.go:130] > # ]
	I0408 22:56:32.901768   21772 command_runner.go:130] > # List of additional devices. specified as
	I0408 22:56:32.901782   21772 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0408 22:56:32.901793   21772 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0408 22:56:32.901802   21772 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0408 22:56:32.901811   21772 command_runner.go:130] > # additional_devices = [
	I0408 22:56:32.901816   21772 command_runner.go:130] > # ]
	I0408 22:56:32.901827   21772 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0408 22:56:32.901834   21772 command_runner.go:130] > # cdi_spec_dirs = [
	I0408 22:56:32.901842   21772 command_runner.go:130] > # 	"/etc/cdi",
	I0408 22:56:32.901848   21772 command_runner.go:130] > # 	"/var/run/cdi",
	I0408 22:56:32.901856   21772 command_runner.go:130] > # ]
	I0408 22:56:32.901866   21772 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0408 22:56:32.901878   21772 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0408 22:56:32.901885   21772 command_runner.go:130] > # Defaults to false.
	I0408 22:56:32.901891   21772 command_runner.go:130] > # device_ownership_from_security_context = false
	I0408 22:56:32.901909   21772 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0408 22:56:32.901922   21772 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0408 22:56:32.901928   21772 command_runner.go:130] > # hooks_dir = [
	I0408 22:56:32.901936   21772 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0408 22:56:32.901950   21772 command_runner.go:130] > # ]
	I0408 22:56:32.901959   21772 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0408 22:56:32.901970   21772 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0408 22:56:32.901979   21772 command_runner.go:130] > # its default mounts from the following two files:
	I0408 22:56:32.901990   21772 command_runner.go:130] > #
	I0408 22:56:32.902004   21772 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0408 22:56:32.902015   21772 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0408 22:56:32.902024   21772 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0408 22:56:32.902033   21772 command_runner.go:130] > #
	I0408 22:56:32.902042   21772 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0408 22:56:32.902054   21772 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0408 22:56:32.902067   21772 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0408 22:56:32.902078   21772 command_runner.go:130] > #      only add mounts it finds in this file.
	I0408 22:56:32.902083   21772 command_runner.go:130] > #
	I0408 22:56:32.902092   21772 command_runner.go:130] > # default_mounts_file = ""
	I0408 22:56:32.902103   21772 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0408 22:56:32.902115   21772 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0408 22:56:32.902125   21772 command_runner.go:130] > pids_limit = 1024
	I0408 22:56:32.902135   21772 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I0408 22:56:32.902144   21772 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0408 22:56:32.902151   21772 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0408 22:56:32.902166   21772 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0408 22:56:32.902177   21772 command_runner.go:130] > # log_size_max = -1
	I0408 22:56:32.902187   21772 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0408 22:56:32.902194   21772 command_runner.go:130] > # log_to_journald = false
	I0408 22:56:32.902206   21772 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0408 22:56:32.902216   21772 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0408 22:56:32.902224   21772 command_runner.go:130] > # Path to directory for container attach sockets.
	I0408 22:56:32.902234   21772 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0408 22:56:32.902254   21772 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0408 22:56:32.902264   21772 command_runner.go:130] > # bind_mount_prefix = ""
	I0408 22:56:32.902272   21772 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0408 22:56:32.902281   21772 command_runner.go:130] > # read_only = false
	I0408 22:56:32.902290   21772 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0408 22:56:32.902303   21772 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0408 22:56:32.902311   21772 command_runner.go:130] > # live configuration reload.
	I0408 22:56:32.902315   21772 command_runner.go:130] > # log_level = "info"
	I0408 22:56:32.902325   21772 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0408 22:56:32.902334   21772 command_runner.go:130] > # This option supports live configuration reload.
	I0408 22:56:32.902343   21772 command_runner.go:130] > # log_filter = ""
	I0408 22:56:32.902352   21772 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0408 22:56:32.902366   21772 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0408 22:56:32.902373   21772 command_runner.go:130] > # separated by comma.
	I0408 22:56:32.902387   21772 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0408 22:56:32.902396   21772 command_runner.go:130] > # uid_mappings = ""
	I0408 22:56:32.902405   21772 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0408 22:56:32.902417   21772 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0408 22:56:32.902427   21772 command_runner.go:130] > # separated by comma.
	I0408 22:56:32.902442   21772 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0408 22:56:32.902450   21772 command_runner.go:130] > # gid_mappings = ""
	I0408 22:56:32.902459   21772 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0408 22:56:32.902472   21772 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0408 22:56:32.902481   21772 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0408 22:56:32.902489   21772 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0408 22:56:32.902499   21772 command_runner.go:130] > # minimum_mappable_uid = -1
	I0408 22:56:32.902508   21772 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0408 22:56:32.902521   21772 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0408 22:56:32.902533   21772 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0408 22:56:32.902545   21772 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0408 22:56:32.902554   21772 command_runner.go:130] > # minimum_mappable_gid = -1
	I0408 22:56:32.902563   21772 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0408 22:56:32.902571   21772 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0408 22:56:32.902584   21772 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0408 22:56:32.902595   21772 command_runner.go:130] > # ctr_stop_timeout = 30
	I0408 22:56:32.902608   21772 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0408 22:56:32.902619   21772 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0408 22:56:32.902629   21772 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0408 22:56:32.902637   21772 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0408 22:56:32.902646   21772 command_runner.go:130] > drop_infra_ctr = false
	I0408 22:56:32.902653   21772 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0408 22:56:32.902661   21772 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0408 22:56:32.902672   21772 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0408 22:56:32.902683   21772 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0408 22:56:32.902696   21772 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0408 22:56:32.902708   21772 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0408 22:56:32.902719   21772 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0408 22:56:32.902730   21772 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0408 22:56:32.902735   21772 command_runner.go:130] > # shared_cpuset = ""
	I0408 22:56:32.902740   21772 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0408 22:56:32.902747   21772 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0408 22:56:32.902753   21772 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0408 22:56:32.902767   21772 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0408 22:56:32.902777   21772 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0408 22:56:32.902789   21772 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0408 22:56:32.902801   21772 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0408 22:56:32.902811   21772 command_runner.go:130] > # enable_criu_support = false
	I0408 22:56:32.902820   21772 command_runner.go:130] > # Enable/disable the generation of the container,
	I0408 22:56:32.902826   21772 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0408 22:56:32.902834   21772 command_runner.go:130] > # enable_pod_events = false
	I0408 22:56:32.902844   21772 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0408 22:56:32.902857   21772 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0408 22:56:32.902867   21772 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0408 22:56:32.902873   21772 command_runner.go:130] > # default_runtime = "runc"
	I0408 22:56:32.902884   21772 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0408 22:56:32.902897   21772 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of creating it as a directory).
	I0408 22:56:32.902917   21772 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0408 22:56:32.902928   21772 command_runner.go:130] > # creation as a file is not desired either.
	I0408 22:56:32.902945   21772 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0408 22:56:32.902956   21772 command_runner.go:130] > # the hostname is being managed dynamically.
	I0408 22:56:32.902962   21772 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0408 22:56:32.902970   21772 command_runner.go:130] > # ]
	I0408 22:56:32.902983   21772 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0408 22:56:32.902993   21772 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0408 22:56:32.903002   21772 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0408 22:56:32.903013   21772 command_runner.go:130] > # Each entry in the table should follow the format:
	I0408 22:56:32.903022   21772 command_runner.go:130] > #
	I0408 22:56:32.903029   21772 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0408 22:56:32.903039   21772 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0408 22:56:32.903114   21772 command_runner.go:130] > # runtime_type = "oci"
	I0408 22:56:32.903129   21772 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0408 22:56:32.903136   21772 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0408 22:56:32.903142   21772 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0408 22:56:32.903150   21772 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0408 22:56:32.903156   21772 command_runner.go:130] > # monitor_env = []
	I0408 22:56:32.903164   21772 command_runner.go:130] > # privileged_without_host_devices = false
	I0408 22:56:32.903171   21772 command_runner.go:130] > # allowed_annotations = []
	I0408 22:56:32.903177   21772 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0408 22:56:32.903186   21772 command_runner.go:130] > # Where:
	I0408 22:56:32.903195   21772 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0408 22:56:32.903207   21772 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0408 22:56:32.903220   21772 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0408 22:56:32.903235   21772 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0408 22:56:32.903243   21772 command_runner.go:130] > #   in $PATH.
	I0408 22:56:32.903253   21772 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0408 22:56:32.903260   21772 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0408 22:56:32.903267   21772 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0408 22:56:32.903275   21772 command_runner.go:130] > #   state.
	I0408 22:56:32.903291   21772 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0408 22:56:32.903308   21772 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0408 22:56:32.903321   21772 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0408 22:56:32.903329   21772 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0408 22:56:32.903340   21772 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0408 22:56:32.903348   21772 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0408 22:56:32.903355   21772 command_runner.go:130] > #   The currently recognized values are:
	I0408 22:56:32.903368   21772 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0408 22:56:32.903382   21772 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0408 22:56:32.903394   21772 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0408 22:56:32.903404   21772 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0408 22:56:32.903418   21772 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0408 22:56:32.903429   21772 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0408 22:56:32.903443   21772 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0408 22:56:32.903456   21772 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0408 22:56:32.903467   21772 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0408 22:56:32.903479   21772 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0408 22:56:32.903489   21772 command_runner.go:130] > #   deprecated option "conmon".
	I0408 22:56:32.903501   21772 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0408 22:56:32.903513   21772 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0408 22:56:32.903527   21772 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0408 22:56:32.903538   21772 command_runner.go:130] > #   should be moved to the container's cgroup
	I0408 22:56:32.903548   21772 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0408 22:56:32.903557   21772 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0408 22:56:32.903568   21772 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0408 22:56:32.903577   21772 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0408 22:56:32.903580   21772 command_runner.go:130] > #
	I0408 22:56:32.903588   21772 command_runner.go:130] > # Using the seccomp notifier feature:
	I0408 22:56:32.903595   21772 command_runner.go:130] > #
	I0408 22:56:32.903604   21772 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0408 22:56:32.903618   21772 command_runner.go:130] > # blocked syscalls (permission denied errors) have a negative impact on the workload.
	I0408 22:56:32.903622   21772 command_runner.go:130] > #
	I0408 22:56:32.903632   21772 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0408 22:56:32.903644   21772 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0408 22:56:32.903657   21772 command_runner.go:130] > #
	I0408 22:56:32.903669   21772 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0408 22:56:32.903678   21772 command_runner.go:130] > # feature.
	I0408 22:56:32.903682   21772 command_runner.go:130] > #
	I0408 22:56:32.903694   21772 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0408 22:56:32.903706   21772 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0408 22:56:32.903718   21772 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0408 22:56:32.903728   21772 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0408 22:56:32.903739   21772 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction" is "stop".
	I0408 22:56:32.903747   21772 command_runner.go:130] > #
	I0408 22:56:32.903756   21772 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0408 22:56:32.903766   21772 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0408 22:56:32.903769   21772 command_runner.go:130] > #
	I0408 22:56:32.903777   21772 command_runner.go:130] > # This also means that the Pod's "restartPolicy" has to be set to "Never",
	I0408 22:56:32.903789   21772 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0408 22:56:32.903797   21772 command_runner.go:130] > #
	I0408 22:56:32.903805   21772 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0408 22:56:32.903816   21772 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0408 22:56:32.903828   21772 command_runner.go:130] > # limitation.
	I0408 22:56:32.903839   21772 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0408 22:56:32.903846   21772 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0408 22:56:32.903850   21772 command_runner.go:130] > runtime_type = "oci"
	I0408 22:56:32.903854   21772 command_runner.go:130] > runtime_root = "/run/runc"
	I0408 22:56:32.903860   21772 command_runner.go:130] > runtime_config_path = ""
	I0408 22:56:32.903881   21772 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0408 22:56:32.903890   21772 command_runner.go:130] > monitor_cgroup = "pod"
	I0408 22:56:32.903896   21772 command_runner.go:130] > monitor_exec_cgroup = ""
	I0408 22:56:32.903905   21772 command_runner.go:130] > monitor_env = [
	I0408 22:56:32.903914   21772 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0408 22:56:32.903922   21772 command_runner.go:130] > ]
	I0408 22:56:32.903929   21772 command_runner.go:130] > privileged_without_host_devices = false
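The runc entry above is the only handler defined in this run. As a rough illustration of the [crio.runtime.runtimes] table format described in the preceding comments, and of allowing the seccomp notifier annotation for a single handler, a second runtime could be added via a drop-in file; the drop-in name, the crun binary path, and the restart step are assumptions for illustration, not something this test performs:

	sudo tee /etc/crio/crio.conf.d/20-crun.conf >/dev/null <<'EOF'
	# Hypothetical second runtime handler, selected when a Pod's RuntimeClass maps to "crun".
	[crio.runtime.runtimes.crun]
	runtime_path = "/usr/bin/crun"
	runtime_type = "oci"
	runtime_root = "/run/crun"
	monitor_path = "/usr/libexec/crio/conmon"
	# Allow the seccomp notifier annotation discussed above for this handler only.
	allowed_annotations = ["io.kubernetes.cri-o.seccompNotifierAction"]
	EOF
	sudo systemctl restart crio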
	I0408 22:56:32.903943   21772 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0408 22:56:32.903954   21772 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0408 22:56:32.903974   21772 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0408 22:56:32.903992   21772 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I0408 22:56:32.904007   21772 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0408 22:56:32.904018   21772 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0408 22:56:32.904031   21772 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0408 22:56:32.904046   21772 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0408 22:56:32.904059   21772 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0408 22:56:32.904070   21772 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0408 22:56:32.904078   21772 command_runner.go:130] > # Example:
	I0408 22:56:32.904085   21772 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0408 22:56:32.904096   21772 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0408 22:56:32.904104   21772 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0408 22:56:32.904109   21772 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0408 22:56:32.904116   21772 command_runner.go:130] > # cpuset = "0-1"
	I0408 22:56:32.904122   21772 command_runner.go:130] > # cpushares = 0
	I0408 22:56:32.904131   21772 command_runner.go:130] > # Where:
	I0408 22:56:32.904138   21772 command_runner.go:130] > # The workload name is workload-type.
	I0408 22:56:32.904151   21772 command_runner.go:130] > # To opt in, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0408 22:56:32.904162   21772 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0408 22:56:32.904171   21772 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0408 22:56:32.904185   21772 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0408 22:56:32.904195   21772 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0408 22:56:32.904202   21772 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0408 22:56:32.904216   21772 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0408 22:56:32.904226   21772 command_runner.go:130] > # Default value is set to true
	I0408 22:56:32.904232   21772 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0408 22:56:32.904244   21772 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0408 22:56:32.904253   21772 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0408 22:56:32.904260   21772 command_runner.go:130] > # Default value is set to 'false'
	I0408 22:56:32.904267   21772 command_runner.go:130] > # disable_hostport_mapping = false
	I0408 22:56:32.904275   21772 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0408 22:56:32.904280   21772 command_runner.go:130] > #
	I0408 22:56:32.904288   21772 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0408 22:56:32.904307   21772 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0408 22:56:32.904322   21772 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0408 22:56:32.904335   21772 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0408 22:56:32.904349   21772 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0408 22:56:32.904357   21772 command_runner.go:130] > [crio.image]
	I0408 22:56:32.904363   21772 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0408 22:56:32.904371   21772 command_runner.go:130] > # default_transport = "docker://"
	I0408 22:56:32.904382   21772 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0408 22:56:32.904394   21772 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0408 22:56:32.904404   21772 command_runner.go:130] > # global_auth_file = ""
	I0408 22:56:32.904411   21772 command_runner.go:130] > # The image used to instantiate infra containers.
	I0408 22:56:32.904421   21772 command_runner.go:130] > # This option supports live configuration reload.
	I0408 22:56:32.904431   21772 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.10"
	I0408 22:56:32.904441   21772 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0408 22:56:32.904449   21772 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0408 22:56:32.904454   21772 command_runner.go:130] > # This option supports live configuration reload.
	I0408 22:56:32.904459   21772 command_runner.go:130] > # pause_image_auth_file = ""
	I0408 22:56:32.904464   21772 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0408 22:56:32.904472   21772 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0408 22:56:32.904481   21772 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0408 22:56:32.904494   21772 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0408 22:56:32.904503   21772 command_runner.go:130] > # pause_command = "/pause"
	I0408 22:56:32.904511   21772 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0408 22:56:32.904551   21772 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0408 22:56:32.904556   21772 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0408 22:56:32.904564   21772 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0408 22:56:32.904569   21772 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0408 22:56:32.904578   21772 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0408 22:56:32.904584   21772 command_runner.go:130] > # pinned_images = [
	I0408 22:56:32.904592   21772 command_runner.go:130] > # ]
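As a sketch of what a populated pinned_images list could look like given the exact/glob/keyword matching rules described above (the drop-in file name and the image names other than the pause image are illustrative, not taken from this run):

	sudo tee /etc/crio/crio.conf.d/30-pinned-images.conf >/dev/null <<'EOF'
	[crio.image]
	pinned_images = [
	  "registry.k8s.io/pause:3.10",   # exact match: the whole name must match
	  "registry.k8s.io/kube-*",       # glob match: a single * wildcard at the end
	  "*coredns*",                    # keyword match: wildcards on both ends
	]
	EOF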
	I0408 22:56:32.904600   21772 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0408 22:56:32.904607   21772 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0408 22:56:32.904615   21772 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0408 22:56:32.904629   21772 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0408 22:56:32.904642   21772 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0408 22:56:32.904651   21772 command_runner.go:130] > # signature_policy = ""
	I0408 22:56:32.904660   21772 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0408 22:56:32.904672   21772 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0408 22:56:32.904681   21772 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0408 22:56:32.904694   21772 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0408 22:56:32.904702   21772 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0408 22:56:32.904707   21772 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0408 22:56:32.904714   21772 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0408 22:56:32.904720   21772 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0408 22:56:32.904723   21772 command_runner.go:130] > # changing them here.
	I0408 22:56:32.904726   21772 command_runner.go:130] > # insecure_registries = [
	I0408 22:56:32.904729   21772 command_runner.go:130] > # ]
	I0408 22:56:32.904735   21772 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0408 22:56:32.904739   21772 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0408 22:56:32.904743   21772 command_runner.go:130] > # image_volumes = "mkdir"
	I0408 22:56:32.904747   21772 command_runner.go:130] > # Temporary directory to use for storing big files
	I0408 22:56:32.904751   21772 command_runner.go:130] > # big_files_temporary_dir = ""
	I0408 22:56:32.904756   21772 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0408 22:56:32.904760   21772 command_runner.go:130] > # CNI plugins.
	I0408 22:56:32.904763   21772 command_runner.go:130] > [crio.network]
	I0408 22:56:32.904768   21772 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0408 22:56:32.904773   21772 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0408 22:56:32.904777   21772 command_runner.go:130] > # cni_default_network = ""
	I0408 22:56:32.904782   21772 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0408 22:56:32.904786   21772 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0408 22:56:32.904791   21772 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0408 22:56:32.904794   21772 command_runner.go:130] > # plugin_dirs = [
	I0408 22:56:32.904798   21772 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0408 22:56:32.904800   21772 command_runner.go:130] > # ]
	I0408 22:56:32.904805   21772 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0408 22:56:32.904809   21772 command_runner.go:130] > [crio.metrics]
	I0408 22:56:32.904818   21772 command_runner.go:130] > # Globally enable or disable metrics support.
	I0408 22:56:32.904821   21772 command_runner.go:130] > enable_metrics = true
	I0408 22:56:32.904825   21772 command_runner.go:130] > # Specify enabled metrics collectors.
	I0408 22:56:32.904829   21772 command_runner.go:130] > # Per default all metrics are enabled.
	I0408 22:56:32.904834   21772 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0408 22:56:32.904840   21772 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0408 22:56:32.904847   21772 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0408 22:56:32.904853   21772 command_runner.go:130] > # metrics_collectors = [
	I0408 22:56:32.904859   21772 command_runner.go:130] > # 	"operations",
	I0408 22:56:32.904866   21772 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0408 22:56:32.904871   21772 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0408 22:56:32.904875   21772 command_runner.go:130] > # 	"operations_errors",
	I0408 22:56:32.904879   21772 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0408 22:56:32.904882   21772 command_runner.go:130] > # 	"image_pulls_by_name",
	I0408 22:56:32.904888   21772 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0408 22:56:32.904892   21772 command_runner.go:130] > # 	"image_pulls_failures",
	I0408 22:56:32.904895   21772 command_runner.go:130] > # 	"image_pulls_successes",
	I0408 22:56:32.904899   21772 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0408 22:56:32.904903   21772 command_runner.go:130] > # 	"image_layer_reuse",
	I0408 22:56:32.904907   21772 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0408 22:56:32.904911   21772 command_runner.go:130] > # 	"containers_oom_total",
	I0408 22:56:32.904915   21772 command_runner.go:130] > # 	"containers_oom",
	I0408 22:56:32.904918   21772 command_runner.go:130] > # 	"processes_defunct",
	I0408 22:56:32.904922   21772 command_runner.go:130] > # 	"operations_total",
	I0408 22:56:32.904929   21772 command_runner.go:130] > # 	"operations_latency_seconds",
	I0408 22:56:32.904933   21772 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0408 22:56:32.904937   21772 command_runner.go:130] > # 	"operations_errors_total",
	I0408 22:56:32.904947   21772 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0408 22:56:32.904955   21772 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0408 22:56:32.904959   21772 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0408 22:56:32.904963   21772 command_runner.go:130] > # 	"image_pulls_success_total",
	I0408 22:56:32.904967   21772 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0408 22:56:32.904971   21772 command_runner.go:130] > # 	"containers_oom_count_total",
	I0408 22:56:32.904981   21772 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0408 22:56:32.904988   21772 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0408 22:56:32.904991   21772 command_runner.go:130] > # ]
	I0408 22:56:32.905000   21772 command_runner.go:130] > # The port on which the metrics server will listen.
	I0408 22:56:32.905006   21772 command_runner.go:130] > # metrics_port = 9090
	I0408 22:56:32.905011   21772 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0408 22:56:32.905014   21772 command_runner.go:130] > # metrics_socket = ""
	I0408 22:56:32.905019   21772 command_runner.go:130] > # The certificate for the secure metrics server.
	I0408 22:56:32.905024   21772 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0408 22:56:32.905033   21772 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0408 22:56:32.905037   21772 command_runner.go:130] > # certificate on any modification event.
	I0408 22:56:32.905043   21772 command_runner.go:130] > # metrics_cert = ""
	I0408 22:56:32.905048   21772 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0408 22:56:32.905052   21772 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0408 22:56:32.905058   21772 command_runner.go:130] > # metrics_key = ""
	I0408 22:56:32.905064   21772 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0408 22:56:32.905070   21772 command_runner.go:130] > [crio.tracing]
	I0408 22:56:32.905075   21772 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0408 22:56:32.905079   21772 command_runner.go:130] > # enable_tracing = false
	I0408 22:56:32.905087   21772 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I0408 22:56:32.905091   21772 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0408 22:56:32.905097   21772 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0408 22:56:32.905104   21772 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0408 22:56:32.905108   21772 command_runner.go:130] > # CRI-O NRI configuration.
	I0408 22:56:32.905113   21772 command_runner.go:130] > [crio.nri]
	I0408 22:56:32.905117   21772 command_runner.go:130] > # Globally enable or disable NRI.
	I0408 22:56:32.905125   21772 command_runner.go:130] > # enable_nri = false
	I0408 22:56:32.905129   21772 command_runner.go:130] > # NRI socket to listen on.
	I0408 22:56:32.905136   21772 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0408 22:56:32.905139   21772 command_runner.go:130] > # NRI plugin directory to use.
	I0408 22:56:32.905144   21772 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0408 22:56:32.905148   21772 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0408 22:56:32.905155   21772 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0408 22:56:32.905164   21772 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0408 22:56:32.905171   21772 command_runner.go:130] > # nri_disable_connections = false
	I0408 22:56:32.905175   21772 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0408 22:56:32.905182   21772 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0408 22:56:32.905186   21772 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0408 22:56:32.905193   21772 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0408 22:56:32.905199   21772 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0408 22:56:32.905204   21772 command_runner.go:130] > [crio.stats]
	I0408 22:56:32.905210   21772 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0408 22:56:32.905217   21772 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0408 22:56:32.905223   21772 command_runner.go:130] > # stats_collection_period = 0
	I0408 22:56:32.905256   21772 command_runner.go:130] ! time="2025-04-08 22:56:32.868436253Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0408 22:56:32.905274   21772 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
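The dump above is CRI-O's fully rendered configuration as echoed back by the runner. If it needs to be reproduced or inspected on the node directly, something along these lines should work; the exact output depends on the installed CRI-O and crictl versions:

	# Print the configuration CRI-O would run with, including defaults and drop-ins.
	sudo crio config | less
	# Ask the running daemon, via the CRI socket, for its status and runtime configuration.
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock info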
	I0408 22:56:32.905342   21772 cni.go:84] Creating CNI manager for ""
	I0408 22:56:32.905354   21772 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0408 22:56:32.905364   21772 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0408 22:56:32.905388   21772 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.234 APIServerPort:8441 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-546336 NodeName:functional-546336 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.234"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.234 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0408 22:56:32.905493   21772 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.234
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-546336"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.234"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.234"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
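The rendered kubeadm configuration above is later copied to /var/tmp/minikube/kubeadm.yaml.new on the node. A hedged way to sanity-check such a file without changing cluster state, assuming a kubeadm binary of the matching version is on PATH, is a dry run:

	# Parse the config and walk through the init phases without persisting anything.
	sudo kubeadm init --dry-run --config /var/tmp/minikube/kubeadm.yaml.new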
	
	I0408 22:56:32.905580   21772 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0408 22:56:32.914548   21772 command_runner.go:130] > kubeadm
	I0408 22:56:32.914564   21772 command_runner.go:130] > kubectl
	I0408 22:56:32.914568   21772 command_runner.go:130] > kubelet
	I0408 22:56:32.914646   21772 binaries.go:44] Found k8s binaries, skipping transfer
	I0408 22:56:32.914718   21772 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0408 22:56:32.923150   21772 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0408 22:56:32.938212   21772 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0408 22:56:32.953395   21772 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2297 bytes)
	I0408 22:56:32.968282   21772 ssh_runner.go:195] Run: grep 192.168.39.234	control-plane.minikube.internal$ /etc/hosts
	I0408 22:56:32.971857   21772 command_runner.go:130] > 192.168.39.234	control-plane.minikube.internal
	I0408 22:56:32.971923   21772 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 22:56:33.097315   21772 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0408 22:56:33.112048   21772 certs.go:68] Setting up /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/functional-546336 for IP: 192.168.39.234
	I0408 22:56:33.112066   21772 certs.go:194] generating shared ca certs ...
	I0408 22:56:33.112083   21772 certs.go:226] acquiring lock for ca certs: {Name:mk0d455aae85017ac942481bbc1202ccedea144f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 22:56:33.112251   21772 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20501-9125/.minikube/ca.key
	I0408 22:56:33.112294   21772 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20501-9125/.minikube/proxy-client-ca.key
	I0408 22:56:33.112308   21772 certs.go:256] generating profile certs ...
	I0408 22:56:33.112383   21772 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/functional-546336/client.key
	I0408 22:56:33.112451   21772 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/functional-546336/apiserver.key.848fae18
	I0408 22:56:33.112486   21772 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/functional-546336/proxy-client.key
	I0408 22:56:33.112495   21772 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20501-9125/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0408 22:56:33.112506   21772 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20501-9125/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0408 22:56:33.112517   21772 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20501-9125/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0408 22:56:33.112526   21772 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20501-9125/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0408 22:56:33.112540   21772 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/functional-546336/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0408 22:56:33.112552   21772 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/functional-546336/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0408 22:56:33.112561   21772 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/functional-546336/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0408 22:56:33.112572   21772 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/functional-546336/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0408 22:56:33.112624   21772 certs.go:484] found cert: /home/jenkins/minikube-integration/20501-9125/.minikube/certs/16314.pem (1338 bytes)
	W0408 22:56:33.112665   21772 certs.go:480] ignoring /home/jenkins/minikube-integration/20501-9125/.minikube/certs/16314_empty.pem, impossibly tiny 0 bytes
	I0408 22:56:33.112678   21772 certs.go:484] found cert: /home/jenkins/minikube-integration/20501-9125/.minikube/certs/ca-key.pem (1679 bytes)
	I0408 22:56:33.112704   21772 certs.go:484] found cert: /home/jenkins/minikube-integration/20501-9125/.minikube/certs/ca.pem (1082 bytes)
	I0408 22:56:33.112735   21772 certs.go:484] found cert: /home/jenkins/minikube-integration/20501-9125/.minikube/certs/cert.pem (1123 bytes)
	I0408 22:56:33.112774   21772 certs.go:484] found cert: /home/jenkins/minikube-integration/20501-9125/.minikube/certs/key.pem (1675 bytes)
	I0408 22:56:33.112819   21772 certs.go:484] found cert: /home/jenkins/minikube-integration/20501-9125/.minikube/files/etc/ssl/certs/163142.pem (1708 bytes)
	I0408 22:56:33.112860   21772 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20501-9125/.minikube/files/etc/ssl/certs/163142.pem -> /usr/share/ca-certificates/163142.pem
	I0408 22:56:33.112879   21772 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20501-9125/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0408 22:56:33.112897   21772 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20501-9125/.minikube/certs/16314.pem -> /usr/share/ca-certificates/16314.pem
	I0408 22:56:33.113475   21772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20501-9125/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0408 22:56:33.137877   21772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20501-9125/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0408 22:56:33.159070   21772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20501-9125/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0408 22:56:33.185298   21772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20501-9125/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0408 22:56:33.207770   21772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/functional-546336/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0408 22:56:33.228856   21772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/functional-546336/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0408 22:56:33.251027   21772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/functional-546336/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0408 22:56:33.272315   21772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/functional-546336/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0408 22:56:33.294625   21772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20501-9125/.minikube/files/etc/ssl/certs/163142.pem --> /usr/share/ca-certificates/163142.pem (1708 bytes)
	I0408 22:56:33.316217   21772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20501-9125/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0408 22:56:33.337786   21772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20501-9125/.minikube/certs/16314.pem --> /usr/share/ca-certificates/16314.pem (1338 bytes)
	I0408 22:56:33.358722   21772 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0408 22:56:33.373131   21772 ssh_runner.go:195] Run: openssl version
	I0408 22:56:33.378702   21772 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0408 22:56:33.378755   21772 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/163142.pem && ln -fs /usr/share/ca-certificates/163142.pem /etc/ssl/certs/163142.pem"
	I0408 22:56:33.388262   21772 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/163142.pem
	I0408 22:56:33.392059   21772 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Apr  8 22:53 /usr/share/ca-certificates/163142.pem
	I0408 22:56:33.392090   21772 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  8 22:53 /usr/share/ca-certificates/163142.pem
	I0408 22:56:33.392135   21772 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/163142.pem
	I0408 22:56:33.397236   21772 command_runner.go:130] > 3ec20f2e
	I0408 22:56:33.397295   21772 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/163142.pem /etc/ssl/certs/3ec20f2e.0"
	I0408 22:56:33.405382   21772 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0408 22:56:33.414578   21772 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0408 22:56:33.418346   21772 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Apr  8 22:46 /usr/share/ca-certificates/minikubeCA.pem
	I0408 22:56:33.418448   21772 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  8 22:46 /usr/share/ca-certificates/minikubeCA.pem
	I0408 22:56:33.418490   21772 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0408 22:56:33.423400   21772 command_runner.go:130] > b5213941
	I0408 22:56:33.423452   21772 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0408 22:56:33.431557   21772 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16314.pem && ln -fs /usr/share/ca-certificates/16314.pem /etc/ssl/certs/16314.pem"
	I0408 22:56:33.442046   21772 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16314.pem
	I0408 22:56:33.446095   21772 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Apr  8 22:53 /usr/share/ca-certificates/16314.pem
	I0408 22:56:33.446156   21772 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  8 22:53 /usr/share/ca-certificates/16314.pem
	I0408 22:56:33.446198   21772 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16314.pem
	I0408 22:56:33.451257   21772 command_runner.go:130] > 51391683
	I0408 22:56:33.451490   21772 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16314.pem /etc/ssl/certs/51391683.0"
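The three Run blocks above all follow the same pattern: copy a PEM into /usr/share/ca-certificates, compute its OpenSSL subject hash, and symlink it into /etc/ssl/certs as <hash>.0 so the system trust store picks it up. Condensed into a shell sketch, using one of the certificates from this run:

	cert=/usr/share/ca-certificates/minikubeCA.pem
	# openssl prints the subject hash that is used as the trust-store link name.
	hash=$(openssl x509 -hash -noout -in "$cert")
	sudo ln -fs "$cert" "/etc/ssl/certs/${hash}.0"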
	I0408 22:56:33.460149   21772 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0408 22:56:33.463927   21772 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0408 22:56:33.463942   21772 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0408 22:56:33.463948   21772 command_runner.go:130] > Device: 253,1	Inode: 7338542     Links: 1
	I0408 22:56:33.463973   21772 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0408 22:56:33.463986   21772 command_runner.go:130] > Access: 2025-04-08 22:54:04.578144403 +0000
	I0408 22:56:33.463994   21772 command_runner.go:130] > Modify: 2025-04-08 22:54:04.578144403 +0000
	I0408 22:56:33.464003   21772 command_runner.go:130] > Change: 2025-04-08 22:54:04.578144403 +0000
	I0408 22:56:33.464008   21772 command_runner.go:130] >  Birth: 2025-04-08 22:54:04.578144403 +0000
	I0408 22:56:33.464063   21772 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0408 22:56:33.469050   21772 command_runner.go:130] > Certificate will not expire
	I0408 22:56:33.469263   21772 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0408 22:56:33.474068   21772 command_runner.go:130] > Certificate will not expire
	I0408 22:56:33.474186   21772 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0408 22:56:33.478955   21772 command_runner.go:130] > Certificate will not expire
	I0408 22:56:33.479120   21772 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0408 22:56:33.484075   21772 command_runner.go:130] > Certificate will not expire
	I0408 22:56:33.484130   21772 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0408 22:56:33.488910   21772 command_runner.go:130] > Certificate will not expire
	I0408 22:56:33.488951   21772 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0408 22:56:33.493716   21772 command_runner.go:130] > Certificate will not expire
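Each of the expiry checks above uses "openssl x509 -checkend 86400", which exits 0 only if the certificate is still valid 86400 seconds (24 hours) from now. The same check over several of the profile certificates, as a sketch:

	for c in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
	         /var/lib/minikube/certs/etcd/server.crt \
	         /var/lib/minikube/certs/front-proxy-client.crt; do
	  if openssl x509 -noout -in "$c" -checkend 86400; then
	    echo "$c: will not expire within 24h"
	  else
	    echo "$c: expires (or is unreadable) within 24h"
	  fi
	done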
	I0408 22:56:33.493900   21772 kubeadm.go:392] StartCluster: {Name:functional-546336 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:functional-5463
36 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.234 Port:8441 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpti
ons:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 22:56:33.493993   21772 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0408 22:56:33.494051   21772 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0408 22:56:33.531075   21772 command_runner.go:130] > f5383886ea44f313131ed72cfa49949bd8b5d2f4873b4e32d81e980ce2940fd0
	I0408 22:56:33.531123   21772 command_runner.go:130] > c7dc125c272a5b82818d376a354500ce1464ae56dbc755c0a0779f8c284bfc5c
	I0408 22:56:33.531134   21772 command_runner.go:130] > 0b6b87627794e032e13a615d5ea5b991ffef3843bb9ae6e9f153041eb733d782
	I0408 22:56:33.531145   21772 command_runner.go:130] > d19ba2e208f70c1259cbd17e3566fbc44b8b4d173fc3026591fa8cea6fad11ec
	I0408 22:56:33.531154   21772 command_runner.go:130] > a02e7488bb5d2cf6fe89c9af4932fa61408d59b64f5415e68dd23aef1e5f092c
	I0408 22:56:33.531170   21772 command_runner.go:130] > e50303177ff5685821e81bebd1db71a63709a33fc7ff89cb111d923979605c70
	I0408 22:56:33.531180   21772 command_runner.go:130] > d31c5cb795e76e9631c155d0b7e96672025f714ef26b9972bd87e759a40f7e4d
	I0408 22:56:33.531194   21772 command_runner.go:130] > 090c0b802b3a3a27f9446836815daacc48a4cfa1ed0b54043325e0eada99d664
	I0408 22:56:33.531207   21772 command_runner.go:130] > f5685f897beb5418ec57fb5f80ab70b0ffe4b406ecf635a6249b528e65cabfc4
	I0408 22:56:33.531221   21772 command_runner.go:130] > 31aa14fb57438b0d736b005aed16b4fb5438a2d6ce4af59f042b06d0271bcaa5
	I0408 22:56:33.531245   21772 cri.go:89] found id: "f5383886ea44f313131ed72cfa49949bd8b5d2f4873b4e32d81e980ce2940fd0"
	I0408 22:56:33.531257   21772 cri.go:89] found id: "c7dc125c272a5b82818d376a354500ce1464ae56dbc755c0a0779f8c284bfc5c"
	I0408 22:56:33.531266   21772 cri.go:89] found id: "0b6b87627794e032e13a615d5ea5b991ffef3843bb9ae6e9f153041eb733d782"
	I0408 22:56:33.531275   21772 cri.go:89] found id: "d19ba2e208f70c1259cbd17e3566fbc44b8b4d173fc3026591fa8cea6fad11ec"
	I0408 22:56:33.531284   21772 cri.go:89] found id: "a02e7488bb5d2cf6fe89c9af4932fa61408d59b64f5415e68dd23aef1e5f092c"
	I0408 22:56:33.531294   21772 cri.go:89] found id: "e50303177ff5685821e81bebd1db71a63709a33fc7ff89cb111d923979605c70"
	I0408 22:56:33.531302   21772 cri.go:89] found id: "d31c5cb795e76e9631c155d0b7e96672025f714ef26b9972bd87e759a40f7e4d"
	I0408 22:56:33.531308   21772 cri.go:89] found id: "090c0b802b3a3a27f9446836815daacc48a4cfa1ed0b54043325e0eada99d664"
	I0408 22:56:33.531312   21772 cri.go:89] found id: "f5685f897beb5418ec57fb5f80ab70b0ffe4b406ecf635a6249b528e65cabfc4"
	I0408 22:56:33.531318   21772 cri.go:89] found id: "31aa14fb57438b0d736b005aed16b4fb5438a2d6ce4af59f042b06d0271bcaa5"
	I0408 22:56:33.531323   21772 cri.go:89] found id: ""
	I0408 22:56:33.531374   21772 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-546336 -n functional-546336
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-546336 -n functional-546336: exit status 2 (226.809968ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "functional-546336" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/serial/SoftStart (1169.70s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (336.28s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:713: (dbg) Run:  kubectl --context functional-546336 get po -A
functional_test.go:713: (dbg) Non-zero exit: kubectl --context functional-546336 get po -A: exit status 1 (102.184429ms)

                                                
                                                
** stderr ** 
	E0408 23:14:23.544933   25966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.39.234:8441/api?timeout=32s\": dial tcp 192.168.39.234:8441: connect: connection refused"
	E0408 23:14:23.546541   25966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.39.234:8441/api?timeout=32s\": dial tcp 192.168.39.234:8441: connect: connection refused"
	E0408 23:14:23.548110   25966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.39.234:8441/api?timeout=32s\": dial tcp 192.168.39.234:8441: connect: connection refused"
	E0408 23:14:23.549684   25966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.39.234:8441/api?timeout=32s\": dial tcp 192.168.39.234:8441: connect: connection refused"
	E0408 23:14:23.551125   25966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.39.234:8441/api?timeout=32s\": dial tcp 192.168.39.234:8441: connect: connection refused"
	The connection to the server 192.168.39.234:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:715: failed to get kubectl pods: args "kubectl --context functional-546336 get po -A" : exit status 1
functional_test.go:719: expected stderr to be empty but got *"E0408 23:14:23.544933   25966 memcache.go:265] \"Unhandled Error\" err=\"couldn't get current server API group list: Get \\\"https://192.168.39.234:8441/api?timeout=32s\\\": dial tcp 192.168.39.234:8441: connect: connection refused\"\nE0408 23:14:23.546541   25966 memcache.go:265] \"Unhandled Error\" err=\"couldn't get current server API group list: Get \\\"https://192.168.39.234:8441/api?timeout=32s\\\": dial tcp 192.168.39.234:8441: connect: connection refused\"\nE0408 23:14:23.548110   25966 memcache.go:265] \"Unhandled Error\" err=\"couldn't get current server API group list: Get \\\"https://192.168.39.234:8441/api?timeout=32s\\\": dial tcp 192.168.39.234:8441: connect: connection refused\"\nE0408 23:14:23.549684   25966 memcache.go:265] \"Unhandled Error\" err=\"couldn't get current server API group list: Get \\\"https://192.168.39.234:8441/api?timeout=32s\\\": dial tcp 192.168.39.234:8441: connect: connection refused\"\nE0408 23:14:23.551
125   25966 memcache.go:265] \"Unhandled Error\" err=\"couldn't get current server API group list: Get \\\"https://192.168.39.234:8441/api?timeout=32s\\\": dial tcp 192.168.39.234:8441: connect: connection refused\"\nThe connection to the server 192.168.39.234:8441 was refused - did you specify the right host or port?\n"*: args "kubectl --context functional-546336 get po -A"
functional_test.go:722: expected stdout to include *kube-system* but got *""*. args: "kubectl --context functional-546336 get po -A"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-546336 -n functional-546336
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-546336 -n functional-546336: exit status 2 (224.552378ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestFunctional/serial/KubectlGetPods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/serial/KubectlGetPods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p functional-546336 logs -n 25
E0408 23:17:57.941309   16314 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/addons-355098/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p functional-546336 logs -n 25: (5m35.679548379s)
helpers_test.go:252: TestFunctional/serial/KubectlGetPods logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| addons  | addons-355098 addons           | addons-355098     | jenkins | v1.35.0 | 08 Apr 25 22:49 UTC | 08 Apr 25 22:49 UTC |
	|         | disable csi-hostpath-driver    |                   |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                   |         |         |                     |                     |
	| ip      | addons-355098 ip               | addons-355098     | jenkins | v1.35.0 | 08 Apr 25 22:51 UTC | 08 Apr 25 22:51 UTC |
	| addons  | addons-355098 addons disable   | addons-355098     | jenkins | v1.35.0 | 08 Apr 25 22:51 UTC | 08 Apr 25 22:51 UTC |
	|         | ingress-dns --alsologtostderr  |                   |         |         |                     |                     |
	|         | -v=1                           |                   |         |         |                     |                     |
	| addons  | addons-355098 addons disable   | addons-355098     | jenkins | v1.35.0 | 08 Apr 25 22:51 UTC | 08 Apr 25 22:51 UTC |
	|         | ingress --alsologtostderr -v=1 |                   |         |         |                     |                     |
	| stop    | -p addons-355098               | addons-355098     | jenkins | v1.35.0 | 08 Apr 25 22:51 UTC | 08 Apr 25 22:52 UTC |
	| addons  | enable dashboard -p            | addons-355098     | jenkins | v1.35.0 | 08 Apr 25 22:52 UTC | 08 Apr 25 22:52 UTC |
	|         | addons-355098                  |                   |         |         |                     |                     |
	| addons  | disable dashboard -p           | addons-355098     | jenkins | v1.35.0 | 08 Apr 25 22:52 UTC | 08 Apr 25 22:52 UTC |
	|         | addons-355098                  |                   |         |         |                     |                     |
	| addons  | disable gvisor -p              | addons-355098     | jenkins | v1.35.0 | 08 Apr 25 22:52 UTC | 08 Apr 25 22:52 UTC |
	|         | addons-355098                  |                   |         |         |                     |                     |
	| delete  | -p addons-355098               | addons-355098     | jenkins | v1.35.0 | 08 Apr 25 22:52 UTC | 08 Apr 25 22:52 UTC |
	| start   | -p nospam-715453 -n=1          | nospam-715453     | jenkins | v1.35.0 | 08 Apr 25 22:52 UTC | 08 Apr 25 22:53 UTC |
	|         | --memory=2250 --wait=false     |                   |         |         |                     |                     |
	|         | --log_dir=/tmp/nospam-715453   |                   |         |         |                     |                     |
	|         | --driver=kvm2                  |                   |         |         |                     |                     |
	|         | --container-runtime=crio       |                   |         |         |                     |                     |
	| start   | nospam-715453 --log_dir        | nospam-715453     | jenkins | v1.35.0 | 08 Apr 25 22:53 UTC |                     |
	|         | /tmp/nospam-715453 start       |                   |         |         |                     |                     |
	|         | --dry-run                      |                   |         |         |                     |                     |
	| start   | nospam-715453 --log_dir        | nospam-715453     | jenkins | v1.35.0 | 08 Apr 25 22:53 UTC |                     |
	|         | /tmp/nospam-715453 start       |                   |         |         |                     |                     |
	|         | --dry-run                      |                   |         |         |                     |                     |
	| start   | nospam-715453 --log_dir        | nospam-715453     | jenkins | v1.35.0 | 08 Apr 25 22:53 UTC |                     |
	|         | /tmp/nospam-715453 start       |                   |         |         |                     |                     |
	|         | --dry-run                      |                   |         |         |                     |                     |
	| pause   | nospam-715453 --log_dir        | nospam-715453     | jenkins | v1.35.0 | 08 Apr 25 22:53 UTC | 08 Apr 25 22:53 UTC |
	|         | /tmp/nospam-715453 pause       |                   |         |         |                     |                     |
	| pause   | nospam-715453 --log_dir        | nospam-715453     | jenkins | v1.35.0 | 08 Apr 25 22:53 UTC | 08 Apr 25 22:53 UTC |
	|         | /tmp/nospam-715453 pause       |                   |         |         |                     |                     |
	| pause   | nospam-715453 --log_dir        | nospam-715453     | jenkins | v1.35.0 | 08 Apr 25 22:53 UTC | 08 Apr 25 22:53 UTC |
	|         | /tmp/nospam-715453 pause       |                   |         |         |                     |                     |
	| unpause | nospam-715453 --log_dir        | nospam-715453     | jenkins | v1.35.0 | 08 Apr 25 22:53 UTC | 08 Apr 25 22:53 UTC |
	|         | /tmp/nospam-715453 unpause     |                   |         |         |                     |                     |
	| unpause | nospam-715453 --log_dir        | nospam-715453     | jenkins | v1.35.0 | 08 Apr 25 22:53 UTC | 08 Apr 25 22:53 UTC |
	|         | /tmp/nospam-715453 unpause     |                   |         |         |                     |                     |
	| unpause | nospam-715453 --log_dir        | nospam-715453     | jenkins | v1.35.0 | 08 Apr 25 22:53 UTC | 08 Apr 25 22:53 UTC |
	|         | /tmp/nospam-715453 unpause     |                   |         |         |                     |                     |
	| stop    | nospam-715453 --log_dir        | nospam-715453     | jenkins | v1.35.0 | 08 Apr 25 22:53 UTC | 08 Apr 25 22:53 UTC |
	|         | /tmp/nospam-715453 stop        |                   |         |         |                     |                     |
	| stop    | nospam-715453 --log_dir        | nospam-715453     | jenkins | v1.35.0 | 08 Apr 25 22:53 UTC | 08 Apr 25 22:53 UTC |
	|         | /tmp/nospam-715453 stop        |                   |         |         |                     |                     |
	| stop    | nospam-715453 --log_dir        | nospam-715453     | jenkins | v1.35.0 | 08 Apr 25 22:53 UTC | 08 Apr 25 22:53 UTC |
	|         | /tmp/nospam-715453 stop        |                   |         |         |                     |                     |
	| delete  | -p nospam-715453               | nospam-715453     | jenkins | v1.35.0 | 08 Apr 25 22:53 UTC | 08 Apr 25 22:53 UTC |
	| start   | -p functional-546336           | functional-546336 | jenkins | v1.35.0 | 08 Apr 25 22:53 UTC | 08 Apr 25 22:54 UTC |
	|         | --memory=4000                  |                   |         |         |                     |                     |
	|         | --apiserver-port=8441          |                   |         |         |                     |                     |
	|         | --wait=all --driver=kvm2       |                   |         |         |                     |                     |
	|         | --container-runtime=crio       |                   |         |         |                     |                     |
	| start   | -p functional-546336           | functional-546336 | jenkins | v1.35.0 | 08 Apr 25 22:54 UTC |                     |
	|         | --alsologtostderr -v=8         |                   |         |         |                     |                     |
	|---------|--------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/08 22:54:53
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0408 22:54:53.750429   21772 out.go:345] Setting OutFile to fd 1 ...
	I0408 22:54:53.750673   21772 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0408 22:54:53.750778   21772 out.go:358] Setting ErrFile to fd 2...
	I0408 22:54:53.750790   21772 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0408 22:54:53.751041   21772 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20501-9125/.minikube/bin
	I0408 22:54:53.751600   21772 out.go:352] Setting JSON to false
	I0408 22:54:53.752542   21772 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":2239,"bootTime":1744150655,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0408 22:54:53.752628   21772 start.go:139] virtualization: kvm guest
	I0408 22:54:53.754529   21772 out.go:177] * [functional-546336] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0408 22:54:53.755700   21772 out.go:177]   - MINIKUBE_LOCATION=20501
	I0408 22:54:53.755700   21772 notify.go:220] Checking for updates...
	I0408 22:54:53.757645   21772 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0408 22:54:53.758881   21772 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20501-9125/kubeconfig
	I0408 22:54:53.760110   21772 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20501-9125/.minikube
	I0408 22:54:53.761221   21772 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0408 22:54:53.762262   21772 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0408 22:54:53.764007   21772 config.go:182] Loaded profile config "functional-546336": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0408 22:54:53.764090   21772 driver.go:404] Setting default libvirt URI to qemu:///system
	I0408 22:54:53.764531   21772 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 22:54:53.764591   21772 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 22:54:53.780528   21772 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37317
	I0408 22:54:53.780962   21772 main.go:141] libmachine: () Calling .GetVersion
	I0408 22:54:53.781388   21772 main.go:141] libmachine: Using API Version  1
	I0408 22:54:53.781409   21772 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 22:54:53.781752   21772 main.go:141] libmachine: () Calling .GetMachineName
	I0408 22:54:53.781914   21772 main.go:141] libmachine: (functional-546336) Calling .DriverName
	I0408 22:54:53.819375   21772 out.go:177] * Using the kvm2 driver based on existing profile
	I0408 22:54:53.820528   21772 start.go:297] selected driver: kvm2
	I0408 22:54:53.820538   21772 start.go:901] validating driver "kvm2" against &{Name:functional-546336 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterNa
me:functional-546336 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.234 Port:8441 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSiz
e:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 22:54:53.820619   21772 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0408 22:54:53.820910   21772 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 22:54:53.820988   21772 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20501-9125/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0408 22:54:53.835403   21772 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0408 22:54:53.836289   21772 cni.go:84] Creating CNI manager for ""
	I0408 22:54:53.836343   21772 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0408 22:54:53.836403   21772 start.go:340] cluster config:
	{Name:functional-546336 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:functional-546336 Namespace:default APIServerHAVIP: APIServerName:minikubeC
A APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.234 Port:8441 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror
: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 22:54:53.836507   21772 iso.go:125] acquiring lock: {Name:mk618477bad490b102618c53c9c8c6b34f33ce81 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 22:54:53.838584   21772 out.go:177] * Starting "functional-546336" primary control-plane node in "functional-546336" cluster
	I0408 22:54:53.839517   21772 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0408 22:54:53.839549   21772 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20501-9125/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4
	I0408 22:54:53.839557   21772 cache.go:56] Caching tarball of preloaded images
	I0408 22:54:53.839620   21772 preload.go:172] Found /home/jenkins/minikube-integration/20501-9125/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0408 22:54:53.839629   21772 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on crio
	I0408 22:54:53.839708   21772 profile.go:143] Saving config to /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/functional-546336/config.json ...
	I0408 22:54:53.839890   21772 start.go:360] acquireMachinesLock for functional-546336: {Name:mke7be7b51cfddf557a39ecf6493fff6a1168ec9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 22:54:53.839934   21772 start.go:364] duration metric: took 24.616µs to acquireMachinesLock for "functional-546336"
	I0408 22:54:53.839951   21772 start.go:96] Skipping create...Using existing machine configuration
	I0408 22:54:53.839957   21772 fix.go:54] fixHost starting: 
	I0408 22:54:53.840198   21772 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 22:54:53.840227   21772 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 22:54:53.853842   21772 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36351
	I0408 22:54:53.854248   21772 main.go:141] libmachine: () Calling .GetVersion
	I0408 22:54:53.854642   21772 main.go:141] libmachine: Using API Version  1
	I0408 22:54:53.854660   21772 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 22:54:53.854972   21772 main.go:141] libmachine: () Calling .GetMachineName
	I0408 22:54:53.855161   21772 main.go:141] libmachine: (functional-546336) Calling .DriverName
	I0408 22:54:53.855314   21772 main.go:141] libmachine: (functional-546336) Calling .GetState
	I0408 22:54:53.856978   21772 fix.go:112] recreateIfNeeded on functional-546336: state=Running err=<nil>
	W0408 22:54:53.856995   21772 fix.go:138] unexpected machine state, will restart: <nil>
	I0408 22:54:53.858448   21772 out.go:177] * Updating the running kvm2 "functional-546336" VM ...
	I0408 22:54:53.859370   21772 machine.go:93] provisionDockerMachine start ...
	I0408 22:54:53.859389   21772 main.go:141] libmachine: (functional-546336) Calling .DriverName
	I0408 22:54:53.859573   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHHostname
	I0408 22:54:53.861808   21772 main.go:141] libmachine: (functional-546336) DBG | domain functional-546336 has defined MAC address 52:54:00:8a:a0:b8 in network mk-functional-546336
	I0408 22:54:53.862195   21772 main.go:141] libmachine: (functional-546336) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:a0:b8", ip: ""} in network mk-functional-546336: {Iface:virbr1 ExpiryTime:2025-04-08 23:53:47 +0000 UTC Type:0 Mac:52:54:00:8a:a0:b8 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:functional-546336 Clientid:01:52:54:00:8a:a0:b8}
	I0408 22:54:53.862223   21772 main.go:141] libmachine: (functional-546336) DBG | domain functional-546336 has defined IP address 192.168.39.234 and MAC address 52:54:00:8a:a0:b8 in network mk-functional-546336
	I0408 22:54:53.862331   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHPort
	I0408 22:54:53.862495   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHKeyPath
	I0408 22:54:53.862642   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHKeyPath
	I0408 22:54:53.862769   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHUsername
	I0408 22:54:53.862913   21772 main.go:141] libmachine: Using SSH client type: native
	I0408 22:54:53.863111   21772 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.234 22 <nil> <nil>}
	I0408 22:54:53.863123   21772 main.go:141] libmachine: About to run SSH command:
	hostname
	I0408 22:54:53.975743   21772 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-546336
	
	I0408 22:54:53.975774   21772 main.go:141] libmachine: (functional-546336) Calling .GetMachineName
	I0408 22:54:53.976060   21772 buildroot.go:166] provisioning hostname "functional-546336"
	I0408 22:54:53.976090   21772 main.go:141] libmachine: (functional-546336) Calling .GetMachineName
	I0408 22:54:53.976275   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHHostname
	I0408 22:54:53.978794   21772 main.go:141] libmachine: (functional-546336) DBG | domain functional-546336 has defined MAC address 52:54:00:8a:a0:b8 in network mk-functional-546336
	I0408 22:54:53.979136   21772 main.go:141] libmachine: (functional-546336) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:a0:b8", ip: ""} in network mk-functional-546336: {Iface:virbr1 ExpiryTime:2025-04-08 23:53:47 +0000 UTC Type:0 Mac:52:54:00:8a:a0:b8 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:functional-546336 Clientid:01:52:54:00:8a:a0:b8}
	I0408 22:54:53.979155   21772 main.go:141] libmachine: (functional-546336) DBG | domain functional-546336 has defined IP address 192.168.39.234 and MAC address 52:54:00:8a:a0:b8 in network mk-functional-546336
	I0408 22:54:53.979343   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHPort
	I0408 22:54:53.979538   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHKeyPath
	I0408 22:54:53.979686   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHKeyPath
	I0408 22:54:53.979818   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHUsername
	I0408 22:54:53.979975   21772 main.go:141] libmachine: Using SSH client type: native
	I0408 22:54:53.980186   21772 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.234 22 <nil> <nil>}
	I0408 22:54:53.980207   21772 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-546336 && echo "functional-546336" | sudo tee /etc/hostname
	I0408 22:54:54.107226   21772 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-546336
	
	I0408 22:54:54.107256   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHHostname
	I0408 22:54:54.110121   21772 main.go:141] libmachine: (functional-546336) DBG | domain functional-546336 has defined MAC address 52:54:00:8a:a0:b8 in network mk-functional-546336
	I0408 22:54:54.110402   21772 main.go:141] libmachine: (functional-546336) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:a0:b8", ip: ""} in network mk-functional-546336: {Iface:virbr1 ExpiryTime:2025-04-08 23:53:47 +0000 UTC Type:0 Mac:52:54:00:8a:a0:b8 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:functional-546336 Clientid:01:52:54:00:8a:a0:b8}
	I0408 22:54:54.110442   21772 main.go:141] libmachine: (functional-546336) DBG | domain functional-546336 has defined IP address 192.168.39.234 and MAC address 52:54:00:8a:a0:b8 in network mk-functional-546336
	I0408 22:54:54.110575   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHPort
	I0408 22:54:54.110737   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHKeyPath
	I0408 22:54:54.110870   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHKeyPath
	I0408 22:54:54.110984   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHUsername
	I0408 22:54:54.111111   21772 main.go:141] libmachine: Using SSH client type: native
	I0408 22:54:54.111332   21772 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.234 22 <nil> <nil>}
	I0408 22:54:54.111355   21772 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-546336' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-546336/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-546336' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0408 22:54:54.224292   21772 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0408 22:54:54.224321   21772 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20501-9125/.minikube CaCertPath:/home/jenkins/minikube-integration/20501-9125/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20501-9125/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20501-9125/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20501-9125/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20501-9125/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20501-9125/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20501-9125/.minikube}
	I0408 22:54:54.224341   21772 buildroot.go:174] setting up certificates
	I0408 22:54:54.224352   21772 provision.go:84] configureAuth start
	I0408 22:54:54.224363   21772 main.go:141] libmachine: (functional-546336) Calling .GetMachineName
	I0408 22:54:54.224632   21772 main.go:141] libmachine: (functional-546336) Calling .GetIP
	I0408 22:54:54.227055   21772 main.go:141] libmachine: (functional-546336) DBG | domain functional-546336 has defined MAC address 52:54:00:8a:a0:b8 in network mk-functional-546336
	I0408 22:54:54.227343   21772 main.go:141] libmachine: (functional-546336) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:a0:b8", ip: ""} in network mk-functional-546336: {Iface:virbr1 ExpiryTime:2025-04-08 23:53:47 +0000 UTC Type:0 Mac:52:54:00:8a:a0:b8 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:functional-546336 Clientid:01:52:54:00:8a:a0:b8}
	I0408 22:54:54.227372   21772 main.go:141] libmachine: (functional-546336) DBG | domain functional-546336 has defined IP address 192.168.39.234 and MAC address 52:54:00:8a:a0:b8 in network mk-functional-546336
	I0408 22:54:54.227496   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHHostname
	I0408 22:54:54.229707   21772 main.go:141] libmachine: (functional-546336) DBG | domain functional-546336 has defined MAC address 52:54:00:8a:a0:b8 in network mk-functional-546336
	I0408 22:54:54.230025   21772 main.go:141] libmachine: (functional-546336) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:a0:b8", ip: ""} in network mk-functional-546336: {Iface:virbr1 ExpiryTime:2025-04-08 23:53:47 +0000 UTC Type:0 Mac:52:54:00:8a:a0:b8 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:functional-546336 Clientid:01:52:54:00:8a:a0:b8}
	I0408 22:54:54.230063   21772 main.go:141] libmachine: (functional-546336) DBG | domain functional-546336 has defined IP address 192.168.39.234 and MAC address 52:54:00:8a:a0:b8 in network mk-functional-546336
	I0408 22:54:54.230204   21772 provision.go:143] copyHostCerts
	I0408 22:54:54.230228   21772 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20501-9125/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20501-9125/.minikube/ca.pem
	I0408 22:54:54.230253   21772 exec_runner.go:144] found /home/jenkins/minikube-integration/20501-9125/.minikube/ca.pem, removing ...
	I0408 22:54:54.230267   21772 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20501-9125/.minikube/ca.pem
	I0408 22:54:54.230331   21772 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20501-9125/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20501-9125/.minikube/ca.pem (1082 bytes)
	I0408 22:54:54.230397   21772 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20501-9125/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20501-9125/.minikube/cert.pem
	I0408 22:54:54.230414   21772 exec_runner.go:144] found /home/jenkins/minikube-integration/20501-9125/.minikube/cert.pem, removing ...
	I0408 22:54:54.230421   21772 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20501-9125/.minikube/cert.pem
	I0408 22:54:54.230442   21772 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20501-9125/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20501-9125/.minikube/cert.pem (1123 bytes)
	I0408 22:54:54.230555   21772 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20501-9125/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20501-9125/.minikube/key.pem
	I0408 22:54:54.230580   21772 exec_runner.go:144] found /home/jenkins/minikube-integration/20501-9125/.minikube/key.pem, removing ...
	I0408 22:54:54.230584   21772 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20501-9125/.minikube/key.pem
	I0408 22:54:54.230614   21772 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20501-9125/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20501-9125/.minikube/key.pem (1675 bytes)
	I0408 22:54:54.230663   21772 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20501-9125/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20501-9125/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20501-9125/.minikube/certs/ca-key.pem org=jenkins.functional-546336 san=[127.0.0.1 192.168.39.234 functional-546336 localhost minikube]
	I0408 22:54:54.377433   21772 provision.go:177] copyRemoteCerts
	I0408 22:54:54.377494   21772 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0408 22:54:54.377516   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHHostname
	I0408 22:54:54.379910   21772 main.go:141] libmachine: (functional-546336) DBG | domain functional-546336 has defined MAC address 52:54:00:8a:a0:b8 in network mk-functional-546336
	I0408 22:54:54.380186   21772 main.go:141] libmachine: (functional-546336) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:a0:b8", ip: ""} in network mk-functional-546336: {Iface:virbr1 ExpiryTime:2025-04-08 23:53:47 +0000 UTC Type:0 Mac:52:54:00:8a:a0:b8 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:functional-546336 Clientid:01:52:54:00:8a:a0:b8}
	I0408 22:54:54.380208   21772 main.go:141] libmachine: (functional-546336) DBG | domain functional-546336 has defined IP address 192.168.39.234 and MAC address 52:54:00:8a:a0:b8 in network mk-functional-546336
	I0408 22:54:54.380353   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHPort
	I0408 22:54:54.380512   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHKeyPath
	I0408 22:54:54.380651   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHUsername
	I0408 22:54:54.380759   21772 sshutil.go:53] new ssh client: &{IP:192.168.39.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20501-9125/.minikube/machines/functional-546336/id_rsa Username:docker}
	I0408 22:54:54.469346   21772 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20501-9125/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0408 22:54:54.469406   21772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20501-9125/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0408 22:54:54.492119   21772 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20501-9125/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0408 22:54:54.492170   21772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20501-9125/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0408 22:54:54.515795   21772 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20501-9125/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0408 22:54:54.515854   21772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20501-9125/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0408 22:54:54.538157   21772 provision.go:87] duration metric: took 313.794377ms to configureAuth
	I0408 22:54:54.538179   21772 buildroot.go:189] setting minikube options for container-runtime
	I0408 22:54:54.538348   21772 config.go:182] Loaded profile config "functional-546336": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0408 22:54:54.538415   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHHostname
	I0408 22:54:54.540893   21772 main.go:141] libmachine: (functional-546336) DBG | domain functional-546336 has defined MAC address 52:54:00:8a:a0:b8 in network mk-functional-546336
	I0408 22:54:54.541189   21772 main.go:141] libmachine: (functional-546336) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:a0:b8", ip: ""} in network mk-functional-546336: {Iface:virbr1 ExpiryTime:2025-04-08 23:53:47 +0000 UTC Type:0 Mac:52:54:00:8a:a0:b8 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:functional-546336 Clientid:01:52:54:00:8a:a0:b8}
	I0408 22:54:54.541211   21772 main.go:141] libmachine: (functional-546336) DBG | domain functional-546336 has defined IP address 192.168.39.234 and MAC address 52:54:00:8a:a0:b8 in network mk-functional-546336
	I0408 22:54:54.541388   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHPort
	I0408 22:54:54.541569   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHKeyPath
	I0408 22:54:54.541794   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHKeyPath
	I0408 22:54:54.541956   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHUsername
	I0408 22:54:54.542154   21772 main.go:141] libmachine: Using SSH client type: native
	I0408 22:54:54.542410   21772 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.234 22 <nil> <nil>}
	I0408 22:54:54.542429   21772 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0408 22:55:00.049143   21772 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0408 22:55:00.049177   21772 machine.go:96] duration metric: took 6.189793928s to provisionDockerMachine
	I0408 22:55:00.049193   21772 start.go:293] postStartSetup for "functional-546336" (driver="kvm2")
	I0408 22:55:00.049216   21772 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0408 22:55:00.049238   21772 main.go:141] libmachine: (functional-546336) Calling .DriverName
	I0408 22:55:00.049527   21772 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0408 22:55:00.049554   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHHostname
	I0408 22:55:00.052053   21772 main.go:141] libmachine: (functional-546336) DBG | domain functional-546336 has defined MAC address 52:54:00:8a:a0:b8 in network mk-functional-546336
	I0408 22:55:00.052329   21772 main.go:141] libmachine: (functional-546336) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:a0:b8", ip: ""} in network mk-functional-546336: {Iface:virbr1 ExpiryTime:2025-04-08 23:53:47 +0000 UTC Type:0 Mac:52:54:00:8a:a0:b8 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:functional-546336 Clientid:01:52:54:00:8a:a0:b8}
	I0408 22:55:00.052357   21772 main.go:141] libmachine: (functional-546336) DBG | domain functional-546336 has defined IP address 192.168.39.234 and MAC address 52:54:00:8a:a0:b8 in network mk-functional-546336
	I0408 22:55:00.052449   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHPort
	I0408 22:55:00.052621   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHKeyPath
	I0408 22:55:00.052774   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHUsername
	I0408 22:55:00.052915   21772 sshutil.go:53] new ssh client: &{IP:192.168.39.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20501-9125/.minikube/machines/functional-546336/id_rsa Username:docker}
	I0408 22:55:00.137252   21772 ssh_runner.go:195] Run: cat /etc/os-release
	I0408 22:55:00.140999   21772 command_runner.go:130] > NAME=Buildroot
	I0408 22:55:00.141018   21772 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0408 22:55:00.141022   21772 command_runner.go:130] > ID=buildroot
	I0408 22:55:00.141034   21772 command_runner.go:130] > VERSION_ID=2023.02.9
	I0408 22:55:00.141041   21772 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0408 22:55:00.141078   21772 info.go:137] Remote host: Buildroot 2023.02.9
	I0408 22:55:00.141091   21772 filesync.go:126] Scanning /home/jenkins/minikube-integration/20501-9125/.minikube/addons for local assets ...
	I0408 22:55:00.141153   21772 filesync.go:126] Scanning /home/jenkins/minikube-integration/20501-9125/.minikube/files for local assets ...
	I0408 22:55:00.141241   21772 filesync.go:149] local asset: /home/jenkins/minikube-integration/20501-9125/.minikube/files/etc/ssl/certs/163142.pem -> 163142.pem in /etc/ssl/certs
	I0408 22:55:00.141253   21772 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20501-9125/.minikube/files/etc/ssl/certs/163142.pem -> /etc/ssl/certs/163142.pem
	I0408 22:55:00.141327   21772 filesync.go:149] local asset: /home/jenkins/minikube-integration/20501-9125/.minikube/files/etc/test/nested/copy/16314/hosts -> hosts in /etc/test/nested/copy/16314
	I0408 22:55:00.141336   21772 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20501-9125/.minikube/files/etc/test/nested/copy/16314/hosts -> /etc/test/nested/copy/16314/hosts
	I0408 22:55:00.141386   21772 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/16314
	I0408 22:55:00.149913   21772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20501-9125/.minikube/files/etc/ssl/certs/163142.pem --> /etc/ssl/certs/163142.pem (1708 bytes)
	I0408 22:55:00.172587   21772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20501-9125/.minikube/files/etc/test/nested/copy/16314/hosts --> /etc/test/nested/copy/16314/hosts (40 bytes)
	I0408 22:55:00.194320   21772 start.go:296] duration metric: took 145.104306ms for postStartSetup
	I0408 22:55:00.194353   21772 fix.go:56] duration metric: took 6.354395244s for fixHost
	I0408 22:55:00.194371   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHHostname
	I0408 22:55:00.197105   21772 main.go:141] libmachine: (functional-546336) DBG | domain functional-546336 has defined MAC address 52:54:00:8a:a0:b8 in network mk-functional-546336
	I0408 22:55:00.197468   21772 main.go:141] libmachine: (functional-546336) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:a0:b8", ip: ""} in network mk-functional-546336: {Iface:virbr1 ExpiryTime:2025-04-08 23:53:47 +0000 UTC Type:0 Mac:52:54:00:8a:a0:b8 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:functional-546336 Clientid:01:52:54:00:8a:a0:b8}
	I0408 22:55:00.197508   21772 main.go:141] libmachine: (functional-546336) DBG | domain functional-546336 has defined IP address 192.168.39.234 and MAC address 52:54:00:8a:a0:b8 in network mk-functional-546336
	I0408 22:55:00.197619   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHPort
	I0408 22:55:00.197806   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHKeyPath
	I0408 22:55:00.197977   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHKeyPath
	I0408 22:55:00.198135   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHUsername
	I0408 22:55:00.198315   21772 main.go:141] libmachine: Using SSH client type: native
	I0408 22:55:00.198518   21772 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.234 22 <nil> <nil>}
	I0408 22:55:00.198529   21772 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0408 22:55:00.312401   21772 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744152900.293880637
	
	I0408 22:55:00.312424   21772 fix.go:216] guest clock: 1744152900.293880637
	I0408 22:55:00.312432   21772 fix.go:229] Guest: 2025-04-08 22:55:00.293880637 +0000 UTC Remote: 2025-04-08 22:55:00.194356923 +0000 UTC m=+6.478226412 (delta=99.523714ms)
	I0408 22:55:00.312463   21772 fix.go:200] guest clock delta is within tolerance: 99.523714ms
	I0408 22:55:00.312469   21772 start.go:83] releasing machines lock for "functional-546336", held for 6.472524067s
	I0408 22:55:00.312490   21772 main.go:141] libmachine: (functional-546336) Calling .DriverName
	I0408 22:55:00.312723   21772 main.go:141] libmachine: (functional-546336) Calling .GetIP
	I0408 22:55:00.315235   21772 main.go:141] libmachine: (functional-546336) DBG | domain functional-546336 has defined MAC address 52:54:00:8a:a0:b8 in network mk-functional-546336
	I0408 22:55:00.315592   21772 main.go:141] libmachine: (functional-546336) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:a0:b8", ip: ""} in network mk-functional-546336: {Iface:virbr1 ExpiryTime:2025-04-08 23:53:47 +0000 UTC Type:0 Mac:52:54:00:8a:a0:b8 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:functional-546336 Clientid:01:52:54:00:8a:a0:b8}
	I0408 22:55:00.315620   21772 main.go:141] libmachine: (functional-546336) DBG | domain functional-546336 has defined IP address 192.168.39.234 and MAC address 52:54:00:8a:a0:b8 in network mk-functional-546336
	I0408 22:55:00.315756   21772 main.go:141] libmachine: (functional-546336) Calling .DriverName
	I0408 22:55:00.316286   21772 main.go:141] libmachine: (functional-546336) Calling .DriverName
	I0408 22:55:00.316432   21772 main.go:141] libmachine: (functional-546336) Calling .DriverName
	I0408 22:55:00.316535   21772 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0408 22:55:00.316574   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHHostname
	I0408 22:55:00.316683   21772 ssh_runner.go:195] Run: cat /version.json
	I0408 22:55:00.316708   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHHostname
	I0408 22:55:00.319048   21772 main.go:141] libmachine: (functional-546336) DBG | domain functional-546336 has defined MAC address 52:54:00:8a:a0:b8 in network mk-functional-546336
	I0408 22:55:00.319325   21772 main.go:141] libmachine: (functional-546336) DBG | domain functional-546336 has defined MAC address 52:54:00:8a:a0:b8 in network mk-functional-546336
	I0408 22:55:00.319354   21772 main.go:141] libmachine: (functional-546336) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:a0:b8", ip: ""} in network mk-functional-546336: {Iface:virbr1 ExpiryTime:2025-04-08 23:53:47 +0000 UTC Type:0 Mac:52:54:00:8a:a0:b8 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:functional-546336 Clientid:01:52:54:00:8a:a0:b8}
	I0408 22:55:00.319371   21772 main.go:141] libmachine: (functional-546336) DBG | domain functional-546336 has defined IP address 192.168.39.234 and MAC address 52:54:00:8a:a0:b8 in network mk-functional-546336
	I0408 22:55:00.319522   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHPort
	I0408 22:55:00.319696   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHKeyPath
	I0408 22:55:00.319776   21772 main.go:141] libmachine: (functional-546336) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:a0:b8", ip: ""} in network mk-functional-546336: {Iface:virbr1 ExpiryTime:2025-04-08 23:53:47 +0000 UTC Type:0 Mac:52:54:00:8a:a0:b8 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:functional-546336 Clientid:01:52:54:00:8a:a0:b8}
	I0408 22:55:00.319817   21772 main.go:141] libmachine: (functional-546336) DBG | domain functional-546336 has defined IP address 192.168.39.234 and MAC address 52:54:00:8a:a0:b8 in network mk-functional-546336
	I0408 22:55:00.319891   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHUsername
	I0408 22:55:00.319984   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHPort
	I0408 22:55:00.320037   21772 sshutil.go:53] new ssh client: &{IP:192.168.39.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20501-9125/.minikube/machines/functional-546336/id_rsa Username:docker}
	I0408 22:55:00.320121   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHKeyPath
	I0408 22:55:00.320259   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHUsername
	I0408 22:55:00.320368   21772 sshutil.go:53] new ssh client: &{IP:192.168.39.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20501-9125/.minikube/machines/functional-546336/id_rsa Username:docker}
	I0408 22:55:00.470604   21772 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0408 22:55:00.470683   21772 command_runner.go:130] > {"iso_version": "v1.35.0", "kicbase_version": "v0.0.45-1736763277-20236", "minikube_version": "v1.35.0", "commit": "3fb24bd87c8c8761e2515e1a9ee13835a389ed68"}
	I0408 22:55:00.470819   21772 ssh_runner.go:195] Run: systemctl --version
	I0408 22:55:00.499552   21772 command_runner.go:130] > systemd 252 (252)
	I0408 22:55:00.499604   21772 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0408 22:55:00.500041   21772 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0408 22:55:00.827340   21772 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0408 22:55:00.834963   21772 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0408 22:55:00.835008   21772 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0408 22:55:00.835072   21772 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0408 22:55:00.877281   21772 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0408 22:55:00.877304   21772 start.go:495] detecting cgroup driver to use...
	I0408 22:55:00.877378   21772 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0408 22:55:00.940318   21772 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0408 22:55:01.008191   21772 docker.go:217] disabling cri-docker service (if available) ...
	I0408 22:55:01.008253   21772 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0408 22:55:01.030120   21772 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0408 22:55:01.062576   21772 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0408 22:55:01.269983   21772 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0408 22:55:01.496425   21772 docker.go:233] disabling docker service ...
	I0408 22:55:01.496502   21772 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0408 22:55:01.519064   21772 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0408 22:55:01.540326   21772 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0408 22:55:01.741595   21772 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0408 22:55:01.913173   21772 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0408 22:55:01.927297   21772 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0408 22:55:01.950625   21772 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0408 22:55:01.951000   21772 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0408 22:55:01.951058   21772 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 22:55:01.962726   21772 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0408 22:55:01.962790   21772 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 22:55:01.974651   21772 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 22:55:01.985351   21772 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 22:55:01.996381   21772 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0408 22:55:02.012061   21772 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 22:55:02.024694   21772 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 22:55:02.036195   21772 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 22:55:02.045483   21772 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0408 22:55:02.053886   21772 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0408 22:55:02.053960   21772 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0408 22:55:02.066815   21772 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 22:55:02.213651   21772 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0408 22:56:32.679193   21772 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.465496752s)
	I0408 22:56:32.679231   21772 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0408 22:56:32.679281   21772 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0408 22:56:32.684914   21772 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0408 22:56:32.684956   21772 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0408 22:56:32.684981   21772 command_runner.go:130] > Device: 0,22	Inode: 1501        Links: 1
	I0408 22:56:32.684990   21772 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0408 22:56:32.684996   21772 command_runner.go:130] > Access: 2025-04-08 22:56:32.503580497 +0000
	I0408 22:56:32.685001   21772 command_runner.go:130] > Modify: 2025-04-08 22:56:32.503580497 +0000
	I0408 22:56:32.685010   21772 command_runner.go:130] > Change: 2025-04-08 22:56:32.503580497 +0000
	I0408 22:56:32.685013   21772 command_runner.go:130] >  Birth: -
	I0408 22:56:32.685205   21772 start.go:563] Will wait 60s for crictl version
	I0408 22:56:32.685262   21772 ssh_runner.go:195] Run: which crictl
	I0408 22:56:32.688828   21772 command_runner.go:130] > /usr/bin/crictl
	I0408 22:56:32.688893   21772 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0408 22:56:32.724970   21772 command_runner.go:130] > Version:  0.1.0
	I0408 22:56:32.724989   21772 command_runner.go:130] > RuntimeName:  cri-o
	I0408 22:56:32.724994   21772 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0408 22:56:32.724998   21772 command_runner.go:130] > RuntimeApiVersion:  v1
	I0408 22:56:32.725893   21772 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0408 22:56:32.725977   21772 ssh_runner.go:195] Run: crio --version
	I0408 22:56:32.752723   21772 command_runner.go:130] > crio version 1.29.1
	I0408 22:56:32.752740   21772 command_runner.go:130] > Version:        1.29.1
	I0408 22:56:32.752746   21772 command_runner.go:130] > GitCommit:      unknown
	I0408 22:56:32.752750   21772 command_runner.go:130] > GitCommitDate:  unknown
	I0408 22:56:32.752754   21772 command_runner.go:130] > GitTreeState:   clean
	I0408 22:56:32.752759   21772 command_runner.go:130] > BuildDate:      2025-01-14T08:57:58Z
	I0408 22:56:32.752763   21772 command_runner.go:130] > GoVersion:      go1.21.6
	I0408 22:56:32.752767   21772 command_runner.go:130] > Compiler:       gc
	I0408 22:56:32.752771   21772 command_runner.go:130] > Platform:       linux/amd64
	I0408 22:56:32.752775   21772 command_runner.go:130] > Linkmode:       dynamic
	I0408 22:56:32.752779   21772 command_runner.go:130] > BuildTags:      
	I0408 22:56:32.752783   21772 command_runner.go:130] >   containers_image_ostree_stub
	I0408 22:56:32.752787   21772 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0408 22:56:32.752791   21772 command_runner.go:130] >   btrfs_noversion
	I0408 22:56:32.752795   21772 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0408 22:56:32.752800   21772 command_runner.go:130] >   libdm_no_deferred_remove
	I0408 22:56:32.752804   21772 command_runner.go:130] >   seccomp
	I0408 22:56:32.752810   21772 command_runner.go:130] > LDFlags:          unknown
	I0408 22:56:32.752814   21772 command_runner.go:130] > SeccompEnabled:   true
	I0408 22:56:32.752818   21772 command_runner.go:130] > AppArmorEnabled:  false
	I0408 22:56:32.753859   21772 ssh_runner.go:195] Run: crio --version
	I0408 22:56:32.778913   21772 command_runner.go:130] > crio version 1.29.1
	I0408 22:56:32.778948   21772 command_runner.go:130] > Version:        1.29.1
	I0408 22:56:32.778957   21772 command_runner.go:130] > GitCommit:      unknown
	I0408 22:56:32.778962   21772 command_runner.go:130] > GitCommitDate:  unknown
	I0408 22:56:32.778967   21772 command_runner.go:130] > GitTreeState:   clean
	I0408 22:56:32.778975   21772 command_runner.go:130] > BuildDate:      2025-01-14T08:57:58Z
	I0408 22:56:32.778980   21772 command_runner.go:130] > GoVersion:      go1.21.6
	I0408 22:56:32.778986   21772 command_runner.go:130] > Compiler:       gc
	I0408 22:56:32.778993   21772 command_runner.go:130] > Platform:       linux/amd64
	I0408 22:56:32.779002   21772 command_runner.go:130] > Linkmode:       dynamic
	I0408 22:56:32.779012   21772 command_runner.go:130] > BuildTags:      
	I0408 22:56:32.779020   21772 command_runner.go:130] >   containers_image_ostree_stub
	I0408 22:56:32.779030   21772 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0408 22:56:32.779037   21772 command_runner.go:130] >   btrfs_noversion
	I0408 22:56:32.779048   21772 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0408 22:56:32.779056   21772 command_runner.go:130] >   libdm_no_deferred_remove
	I0408 22:56:32.779064   21772 command_runner.go:130] >   seccomp
	I0408 22:56:32.779072   21772 command_runner.go:130] > LDFlags:          unknown
	I0408 22:56:32.779080   21772 command_runner.go:130] > SeccompEnabled:   true
	I0408 22:56:32.779090   21772 command_runner.go:130] > AppArmorEnabled:  false
	I0408 22:56:32.780946   21772 out.go:177] * Preparing Kubernetes v1.32.2 on CRI-O 1.29.1 ...
	I0408 22:56:32.782109   21772 main.go:141] libmachine: (functional-546336) Calling .GetIP
	I0408 22:56:32.785040   21772 main.go:141] libmachine: (functional-546336) DBG | domain functional-546336 has defined MAC address 52:54:00:8a:a0:b8 in network mk-functional-546336
	I0408 22:56:32.785454   21772 main.go:141] libmachine: (functional-546336) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:a0:b8", ip: ""} in network mk-functional-546336: {Iface:virbr1 ExpiryTime:2025-04-08 23:53:47 +0000 UTC Type:0 Mac:52:54:00:8a:a0:b8 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:functional-546336 Clientid:01:52:54:00:8a:a0:b8}
	I0408 22:56:32.785486   21772 main.go:141] libmachine: (functional-546336) DBG | domain functional-546336 has defined IP address 192.168.39.234 and MAC address 52:54:00:8a:a0:b8 in network mk-functional-546336
	I0408 22:56:32.785755   21772 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0408 22:56:32.789792   21772 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0408 22:56:32.790053   21772 kubeadm.go:883] updating cluster {Name:functional-546336 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:functional-546336 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.234 Port:8441 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
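
The cluster definition dumped above is a single Go struct printed with %+v. For illustration only, here is a trimmed-down shape containing a handful of the fields visible in the log; the real minikube type (config.ClusterConfig) carries many more fields, and the names below are assumptions chosen to match the printed keys:

package main

import "fmt"

// Node mirrors the per-node entry visible in the Nodes:[...] portion of the dump.
type Node struct {
	IP                string
	Port              int
	KubernetesVersion string
	ControlPlane      bool
	Worker            bool
}

// ClusterConfig is an illustrative subset of the settings printed above.
type ClusterConfig struct {
	Name              string
	Driver            string
	Memory            int // MB
	CPUs              int
	KubernetesVersion string
	ContainerRuntime  string
	Nodes             []Node
}

func main() {
	cc := ClusterConfig{
		Name:              "functional-546336",
		Driver:            "kvm2",
		Memory:            4000,
		CPUs:              2,
		KubernetesVersion: "v1.32.2",
		ContainerRuntime:  "crio",
		Nodes: []Node{{IP: "192.168.39.234", Port: 8441, KubernetesVersion: "v1.32.2", ControlPlane: true, Worker: true}},
	}
	fmt.Printf("updating cluster %+v\n", cc)
}
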
	I0408 22:56:32.790145   21772 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0408 22:56:32.790182   21772 ssh_runner.go:195] Run: sudo crictl images --output json
	I0408 22:56:32.827503   21772 command_runner.go:130] > {
	I0408 22:56:32.827524   21772 command_runner.go:130] >   "images": [
	I0408 22:56:32.827528   21772 command_runner.go:130] >     {
	I0408 22:56:32.827537   21772 command_runner.go:130] >       "id": "d300845f67aebd4f27f549889087215f476cecdd6d9a715b49a4152857549c56",
	I0408 22:56:32.827541   21772 command_runner.go:130] >       "repoTags": [
	I0408 22:56:32.827547   21772 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20241212-9f82dd49"
	I0408 22:56:32.827550   21772 command_runner.go:130] >       ],
	I0408 22:56:32.827554   21772 command_runner.go:130] >       "repoDigests": [
	I0408 22:56:32.827561   21772 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:56ea59f77258052c4506076525318ffa66817500f68e94a50fdf7d600a280d26",
	I0408 22:56:32.827568   21772 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a3c74735c5fc7cab683a2f94dddec913052aacaa8d8b773c88d428e8dee3dd40"
	I0408 22:56:32.827572   21772 command_runner.go:130] >       ],
	I0408 22:56:32.827576   21772 command_runner.go:130] >       "size": "95714353",
	I0408 22:56:32.827579   21772 command_runner.go:130] >       "uid": null,
	I0408 22:56:32.827583   21772 command_runner.go:130] >       "username": "",
	I0408 22:56:32.827593   21772 command_runner.go:130] >       "spec": null,
	I0408 22:56:32.827600   21772 command_runner.go:130] >       "pinned": false
	I0408 22:56:32.827603   21772 command_runner.go:130] >     },
	I0408 22:56:32.827606   21772 command_runner.go:130] >     {
	I0408 22:56:32.827611   21772 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0408 22:56:32.827614   21772 command_runner.go:130] >       "repoTags": [
	I0408 22:56:32.827620   21772 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0408 22:56:32.827624   21772 command_runner.go:130] >       ],
	I0408 22:56:32.827627   21772 command_runner.go:130] >       "repoDigests": [
	I0408 22:56:32.827635   21772 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0408 22:56:32.827645   21772 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0408 22:56:32.827649   21772 command_runner.go:130] >       ],
	I0408 22:56:32.827657   21772 command_runner.go:130] >       "size": "31470524",
	I0408 22:56:32.827663   21772 command_runner.go:130] >       "uid": null,
	I0408 22:56:32.827667   21772 command_runner.go:130] >       "username": "",
	I0408 22:56:32.827670   21772 command_runner.go:130] >       "spec": null,
	I0408 22:56:32.827674   21772 command_runner.go:130] >       "pinned": false
	I0408 22:56:32.827677   21772 command_runner.go:130] >     },
	I0408 22:56:32.827681   21772 command_runner.go:130] >     {
	I0408 22:56:32.827689   21772 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I0408 22:56:32.827692   21772 command_runner.go:130] >       "repoTags": [
	I0408 22:56:32.827697   21772 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I0408 22:56:32.827703   21772 command_runner.go:130] >       ],
	I0408 22:56:32.827706   21772 command_runner.go:130] >       "repoDigests": [
	I0408 22:56:32.827713   21772 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I0408 22:56:32.827720   21772 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I0408 22:56:32.827724   21772 command_runner.go:130] >       ],
	I0408 22:56:32.827727   21772 command_runner.go:130] >       "size": "63273227",
	I0408 22:56:32.827731   21772 command_runner.go:130] >       "uid": null,
	I0408 22:56:32.827737   21772 command_runner.go:130] >       "username": "nonroot",
	I0408 22:56:32.827740   21772 command_runner.go:130] >       "spec": null,
	I0408 22:56:32.827754   21772 command_runner.go:130] >       "pinned": false
	I0408 22:56:32.827759   21772 command_runner.go:130] >     },
	I0408 22:56:32.827766   21772 command_runner.go:130] >     {
	I0408 22:56:32.827773   21772 command_runner.go:130] >       "id": "a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc",
	I0408 22:56:32.827777   21772 command_runner.go:130] >       "repoTags": [
	I0408 22:56:32.827782   21772 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.16-0"
	I0408 22:56:32.827785   21772 command_runner.go:130] >       ],
	I0408 22:56:32.827791   21772 command_runner.go:130] >       "repoDigests": [
	I0408 22:56:32.827798   21772 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:1d988b04a9476119cdbc2025ba58f6eec19554caf36edb43c357ff412d07e990",
	I0408 22:56:32.827811   21772 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5"
	I0408 22:56:32.827816   21772 command_runner.go:130] >       ],
	I0408 22:56:32.827820   21772 command_runner.go:130] >       "size": "151021823",
	I0408 22:56:32.827824   21772 command_runner.go:130] >       "uid": {
	I0408 22:56:32.827830   21772 command_runner.go:130] >         "value": "0"
	I0408 22:56:32.827833   21772 command_runner.go:130] >       },
	I0408 22:56:32.827837   21772 command_runner.go:130] >       "username": "",
	I0408 22:56:32.827840   21772 command_runner.go:130] >       "spec": null,
	I0408 22:56:32.827844   21772 command_runner.go:130] >       "pinned": false
	I0408 22:56:32.827846   21772 command_runner.go:130] >     },
	I0408 22:56:32.827850   21772 command_runner.go:130] >     {
	I0408 22:56:32.827858   21772 command_runner.go:130] >       "id": "85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef",
	I0408 22:56:32.827874   21772 command_runner.go:130] >       "repoTags": [
	I0408 22:56:32.827882   21772 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.32.2"
	I0408 22:56:32.827890   21772 command_runner.go:130] >       ],
	I0408 22:56:32.827896   21772 command_runner.go:130] >       "repoDigests": [
	I0408 22:56:32.827908   21772 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:48e677803a23233a10a796f3d7edc73223e3fbaceb6113665c1015464a743e9d",
	I0408 22:56:32.827916   21772 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:c47449f3e751588ea0cb74e325e0f83db335a415f4f4c7fb147375dd6c84757f"
	I0408 22:56:32.827922   21772 command_runner.go:130] >       ],
	I0408 22:56:32.827925   21772 command_runner.go:130] >       "size": "98055648",
	I0408 22:56:32.827929   21772 command_runner.go:130] >       "uid": {
	I0408 22:56:32.827932   21772 command_runner.go:130] >         "value": "0"
	I0408 22:56:32.827936   21772 command_runner.go:130] >       },
	I0408 22:56:32.827949   21772 command_runner.go:130] >       "username": "",
	I0408 22:56:32.827954   21772 command_runner.go:130] >       "spec": null,
	I0408 22:56:32.827958   21772 command_runner.go:130] >       "pinned": false
	I0408 22:56:32.827966   21772 command_runner.go:130] >     },
	I0408 22:56:32.827970   21772 command_runner.go:130] >     {
	I0408 22:56:32.827976   21772 command_runner.go:130] >       "id": "b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389",
	I0408 22:56:32.827982   21772 command_runner.go:130] >       "repoTags": [
	I0408 22:56:32.827987   21772 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.32.2"
	I0408 22:56:32.827993   21772 command_runner.go:130] >       ],
	I0408 22:56:32.827996   21772 command_runner.go:130] >       "repoDigests": [
	I0408 22:56:32.828003   21772 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:01669d976f198e210414e4864454330f6cbd4e5fedf1570b0340d206442f2ae5",
	I0408 22:56:32.828013   21772 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:399aa50f4d1361c59dc458e634506d02de32613d03a9a614a21058741162ef90"
	I0408 22:56:32.828019   21772 command_runner.go:130] >       ],
	I0408 22:56:32.828022   21772 command_runner.go:130] >       "size": "90793286",
	I0408 22:56:32.828026   21772 command_runner.go:130] >       "uid": {
	I0408 22:56:32.828029   21772 command_runner.go:130] >         "value": "0"
	I0408 22:56:32.828033   21772 command_runner.go:130] >       },
	I0408 22:56:32.828036   21772 command_runner.go:130] >       "username": "",
	I0408 22:56:32.828043   21772 command_runner.go:130] >       "spec": null,
	I0408 22:56:32.828046   21772 command_runner.go:130] >       "pinned": false
	I0408 22:56:32.828049   21772 command_runner.go:130] >     },
	I0408 22:56:32.828052   21772 command_runner.go:130] >     {
	I0408 22:56:32.828058   21772 command_runner.go:130] >       "id": "f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5",
	I0408 22:56:32.828064   21772 command_runner.go:130] >       "repoTags": [
	I0408 22:56:32.828069   21772 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.32.2"
	I0408 22:56:32.828074   21772 command_runner.go:130] >       ],
	I0408 22:56:32.828078   21772 command_runner.go:130] >       "repoDigests": [
	I0408 22:56:32.828085   21772 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:83c025f0faa6799fab6645102a98138e39a9a7db2be3bc792c79d72659b1805d",
	I0408 22:56:32.828094   21772 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:ab90de2ec2cbade95df799a63d85e438f51817055ecee067b694fdd0f776e15d"
	I0408 22:56:32.828097   21772 command_runner.go:130] >       ],
	I0408 22:56:32.828102   21772 command_runner.go:130] >       "size": "95271321",
	I0408 22:56:32.828108   21772 command_runner.go:130] >       "uid": null,
	I0408 22:56:32.828111   21772 command_runner.go:130] >       "username": "",
	I0408 22:56:32.828115   21772 command_runner.go:130] >       "spec": null,
	I0408 22:56:32.828119   21772 command_runner.go:130] >       "pinned": false
	I0408 22:56:32.828124   21772 command_runner.go:130] >     },
	I0408 22:56:32.828131   21772 command_runner.go:130] >     {
	I0408 22:56:32.828140   21772 command_runner.go:130] >       "id": "d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d",
	I0408 22:56:32.828144   21772 command_runner.go:130] >       "repoTags": [
	I0408 22:56:32.828150   21772 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.32.2"
	I0408 22:56:32.828159   21772 command_runner.go:130] >       ],
	I0408 22:56:32.828165   21772 command_runner.go:130] >       "repoDigests": [
	I0408 22:56:32.828207   21772 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:45710d74cfd5aa10a001d0cf81747b77c28617444ffee0503d12f1dcd7450f76",
	I0408 22:56:32.828220   21772 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:c98f93221ffa10bfb46b85966915759dbcaf957098364763242e814fee84363b"
	I0408 22:56:32.828223   21772 command_runner.go:130] >       ],
	I0408 22:56:32.828227   21772 command_runner.go:130] >       "size": "70653254",
	I0408 22:56:32.828230   21772 command_runner.go:130] >       "uid": {
	I0408 22:56:32.828233   21772 command_runner.go:130] >         "value": "0"
	I0408 22:56:32.828236   21772 command_runner.go:130] >       },
	I0408 22:56:32.828239   21772 command_runner.go:130] >       "username": "",
	I0408 22:56:32.828243   21772 command_runner.go:130] >       "spec": null,
	I0408 22:56:32.828247   21772 command_runner.go:130] >       "pinned": false
	I0408 22:56:32.828250   21772 command_runner.go:130] >     },
	I0408 22:56:32.828253   21772 command_runner.go:130] >     {
	I0408 22:56:32.828259   21772 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0408 22:56:32.828265   21772 command_runner.go:130] >       "repoTags": [
	I0408 22:56:32.828269   21772 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0408 22:56:32.828272   21772 command_runner.go:130] >       ],
	I0408 22:56:32.828276   21772 command_runner.go:130] >       "repoDigests": [
	I0408 22:56:32.828283   21772 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0408 22:56:32.828292   21772 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0408 22:56:32.828295   21772 command_runner.go:130] >       ],
	I0408 22:56:32.828298   21772 command_runner.go:130] >       "size": "742080",
	I0408 22:56:32.828302   21772 command_runner.go:130] >       "uid": {
	I0408 22:56:32.828305   21772 command_runner.go:130] >         "value": "65535"
	I0408 22:56:32.828308   21772 command_runner.go:130] >       },
	I0408 22:56:32.828312   21772 command_runner.go:130] >       "username": "",
	I0408 22:56:32.828318   21772 command_runner.go:130] >       "spec": null,
	I0408 22:56:32.828324   21772 command_runner.go:130] >       "pinned": true
	I0408 22:56:32.828331   21772 command_runner.go:130] >     }
	I0408 22:56:32.828334   21772 command_runner.go:130] >   ]
	I0408 22:56:32.828337   21772 command_runner.go:130] > }
	I0408 22:56:32.829120   21772 crio.go:514] all images are preloaded for cri-o runtime.
	I0408 22:56:32.829135   21772 crio.go:433] Images already preloaded, skipping extraction
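
The preload check above runs `sudo crictl images --output json` and concludes that all required images are already present, so extraction is skipped. Below is a sketch of how that JSON could be decoded and checked; the struct fields mirror the keys shown in the output, but the helper is hypothetical and not minikube's implementation:

package main

import (
	"encoding/json"
	"fmt"
)

// imageList mirrors the shape of `crictl images --output json` seen in the log.
type imageList struct {
	Images []struct {
		ID       string   `json:"id"`
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

// hasAllImages reports whether every wanted repoTag appears in the JSON payload.
func hasAllImages(raw []byte, wanted []string) (bool, error) {
	var list imageList
	if err := json.Unmarshal(raw, &list); err != nil {
		return false, err
	}
	present := map[string]bool{}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			present[tag] = true
		}
	}
	for _, w := range wanted {
		if !present[w] {
			return false, nil
		}
	}
	return true, nil
}

func main() {
	raw := []byte(`{"images":[{"id":"abc","repoTags":["registry.k8s.io/kube-apiserver:v1.32.2"]}]}`)
	ok, err := hasAllImages(raw, []string{"registry.k8s.io/kube-apiserver:v1.32.2"})
	fmt.Println(ok, err)
}

A check along these lines would, for example, flag a missing kube-scheduler or coredns tag and trigger the preload extraction path instead of skipping it.
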
	I0408 22:56:32.829174   21772 ssh_runner.go:195] Run: sudo crictl images --output json
	I0408 22:56:32.860598   21772 command_runner.go:130] > {
	I0408 22:56:32.860616   21772 command_runner.go:130] >   "images": [
	I0408 22:56:32.860620   21772 command_runner.go:130] >     {
	I0408 22:56:32.860628   21772 command_runner.go:130] >       "id": "d300845f67aebd4f27f549889087215f476cecdd6d9a715b49a4152857549c56",
	I0408 22:56:32.860632   21772 command_runner.go:130] >       "repoTags": [
	I0408 22:56:32.860637   21772 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20241212-9f82dd49"
	I0408 22:56:32.860641   21772 command_runner.go:130] >       ],
	I0408 22:56:32.860645   21772 command_runner.go:130] >       "repoDigests": [
	I0408 22:56:32.860658   21772 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:56ea59f77258052c4506076525318ffa66817500f68e94a50fdf7d600a280d26",
	I0408 22:56:32.860666   21772 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a3c74735c5fc7cab683a2f94dddec913052aacaa8d8b773c88d428e8dee3dd40"
	I0408 22:56:32.860669   21772 command_runner.go:130] >       ],
	I0408 22:56:32.860674   21772 command_runner.go:130] >       "size": "95714353",
	I0408 22:56:32.860677   21772 command_runner.go:130] >       "uid": null,
	I0408 22:56:32.860682   21772 command_runner.go:130] >       "username": "",
	I0408 22:56:32.860690   21772 command_runner.go:130] >       "spec": null,
	I0408 22:56:32.860694   21772 command_runner.go:130] >       "pinned": false
	I0408 22:56:32.860699   21772 command_runner.go:130] >     },
	I0408 22:56:32.860702   21772 command_runner.go:130] >     {
	I0408 22:56:32.860708   21772 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0408 22:56:32.860712   21772 command_runner.go:130] >       "repoTags": [
	I0408 22:56:32.860719   21772 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0408 22:56:32.860722   21772 command_runner.go:130] >       ],
	I0408 22:56:32.860727   21772 command_runner.go:130] >       "repoDigests": [
	I0408 22:56:32.860734   21772 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0408 22:56:32.860742   21772 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0408 22:56:32.860746   21772 command_runner.go:130] >       ],
	I0408 22:56:32.860752   21772 command_runner.go:130] >       "size": "31470524",
	I0408 22:56:32.860757   21772 command_runner.go:130] >       "uid": null,
	I0408 22:56:32.860761   21772 command_runner.go:130] >       "username": "",
	I0408 22:56:32.860764   21772 command_runner.go:130] >       "spec": null,
	I0408 22:56:32.860768   21772 command_runner.go:130] >       "pinned": false
	I0408 22:56:32.860771   21772 command_runner.go:130] >     },
	I0408 22:56:32.860774   21772 command_runner.go:130] >     {
	I0408 22:56:32.860780   21772 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I0408 22:56:32.860784   21772 command_runner.go:130] >       "repoTags": [
	I0408 22:56:32.860789   21772 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I0408 22:56:32.860793   21772 command_runner.go:130] >       ],
	I0408 22:56:32.860797   21772 command_runner.go:130] >       "repoDigests": [
	I0408 22:56:32.860805   21772 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I0408 22:56:32.860814   21772 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I0408 22:56:32.860818   21772 command_runner.go:130] >       ],
	I0408 22:56:32.860828   21772 command_runner.go:130] >       "size": "63273227",
	I0408 22:56:32.860834   21772 command_runner.go:130] >       "uid": null,
	I0408 22:56:32.860838   21772 command_runner.go:130] >       "username": "nonroot",
	I0408 22:56:32.860842   21772 command_runner.go:130] >       "spec": null,
	I0408 22:56:32.860848   21772 command_runner.go:130] >       "pinned": false
	I0408 22:56:32.860851   21772 command_runner.go:130] >     },
	I0408 22:56:32.860854   21772 command_runner.go:130] >     {
	I0408 22:56:32.860860   21772 command_runner.go:130] >       "id": "a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc",
	I0408 22:56:32.860866   21772 command_runner.go:130] >       "repoTags": [
	I0408 22:56:32.860871   21772 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.16-0"
	I0408 22:56:32.860878   21772 command_runner.go:130] >       ],
	I0408 22:56:32.860882   21772 command_runner.go:130] >       "repoDigests": [
	I0408 22:56:32.860891   21772 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:1d988b04a9476119cdbc2025ba58f6eec19554caf36edb43c357ff412d07e990",
	I0408 22:56:32.860905   21772 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5"
	I0408 22:56:32.860911   21772 command_runner.go:130] >       ],
	I0408 22:56:32.860915   21772 command_runner.go:130] >       "size": "151021823",
	I0408 22:56:32.860921   21772 command_runner.go:130] >       "uid": {
	I0408 22:56:32.860925   21772 command_runner.go:130] >         "value": "0"
	I0408 22:56:32.860931   21772 command_runner.go:130] >       },
	I0408 22:56:32.860946   21772 command_runner.go:130] >       "username": "",
	I0408 22:56:32.860953   21772 command_runner.go:130] >       "spec": null,
	I0408 22:56:32.860957   21772 command_runner.go:130] >       "pinned": false
	I0408 22:56:32.860962   21772 command_runner.go:130] >     },
	I0408 22:56:32.860965   21772 command_runner.go:130] >     {
	I0408 22:56:32.860971   21772 command_runner.go:130] >       "id": "85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef",
	I0408 22:56:32.860977   21772 command_runner.go:130] >       "repoTags": [
	I0408 22:56:32.860982   21772 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.32.2"
	I0408 22:56:32.860985   21772 command_runner.go:130] >       ],
	I0408 22:56:32.860990   21772 command_runner.go:130] >       "repoDigests": [
	I0408 22:56:32.860997   21772 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:48e677803a23233a10a796f3d7edc73223e3fbaceb6113665c1015464a743e9d",
	I0408 22:56:32.861007   21772 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:c47449f3e751588ea0cb74e325e0f83db335a415f4f4c7fb147375dd6c84757f"
	I0408 22:56:32.861010   21772 command_runner.go:130] >       ],
	I0408 22:56:32.861014   21772 command_runner.go:130] >       "size": "98055648",
	I0408 22:56:32.861024   21772 command_runner.go:130] >       "uid": {
	I0408 22:56:32.861030   21772 command_runner.go:130] >         "value": "0"
	I0408 22:56:32.861033   21772 command_runner.go:130] >       },
	I0408 22:56:32.861037   21772 command_runner.go:130] >       "username": "",
	I0408 22:56:32.861043   21772 command_runner.go:130] >       "spec": null,
	I0408 22:56:32.861046   21772 command_runner.go:130] >       "pinned": false
	I0408 22:56:32.861049   21772 command_runner.go:130] >     },
	I0408 22:56:32.861052   21772 command_runner.go:130] >     {
	I0408 22:56:32.861060   21772 command_runner.go:130] >       "id": "b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389",
	I0408 22:56:32.861064   21772 command_runner.go:130] >       "repoTags": [
	I0408 22:56:32.861071   21772 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.32.2"
	I0408 22:56:32.861076   21772 command_runner.go:130] >       ],
	I0408 22:56:32.861082   21772 command_runner.go:130] >       "repoDigests": [
	I0408 22:56:32.861090   21772 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:01669d976f198e210414e4864454330f6cbd4e5fedf1570b0340d206442f2ae5",
	I0408 22:56:32.861099   21772 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:399aa50f4d1361c59dc458e634506d02de32613d03a9a614a21058741162ef90"
	I0408 22:56:32.861103   21772 command_runner.go:130] >       ],
	I0408 22:56:32.861106   21772 command_runner.go:130] >       "size": "90793286",
	I0408 22:56:32.861110   21772 command_runner.go:130] >       "uid": {
	I0408 22:56:32.861114   21772 command_runner.go:130] >         "value": "0"
	I0408 22:56:32.861116   21772 command_runner.go:130] >       },
	I0408 22:56:32.861120   21772 command_runner.go:130] >       "username": "",
	I0408 22:56:32.861126   21772 command_runner.go:130] >       "spec": null,
	I0408 22:56:32.861130   21772 command_runner.go:130] >       "pinned": false
	I0408 22:56:32.861133   21772 command_runner.go:130] >     },
	I0408 22:56:32.861136   21772 command_runner.go:130] >     {
	I0408 22:56:32.861143   21772 command_runner.go:130] >       "id": "f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5",
	I0408 22:56:32.861149   21772 command_runner.go:130] >       "repoTags": [
	I0408 22:56:32.861153   21772 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.32.2"
	I0408 22:56:32.861158   21772 command_runner.go:130] >       ],
	I0408 22:56:32.861162   21772 command_runner.go:130] >       "repoDigests": [
	I0408 22:56:32.861169   21772 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:83c025f0faa6799fab6645102a98138e39a9a7db2be3bc792c79d72659b1805d",
	I0408 22:56:32.861178   21772 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:ab90de2ec2cbade95df799a63d85e438f51817055ecee067b694fdd0f776e15d"
	I0408 22:56:32.861182   21772 command_runner.go:130] >       ],
	I0408 22:56:32.861190   21772 command_runner.go:130] >       "size": "95271321",
	I0408 22:56:32.861196   21772 command_runner.go:130] >       "uid": null,
	I0408 22:56:32.861200   21772 command_runner.go:130] >       "username": "",
	I0408 22:56:32.861204   21772 command_runner.go:130] >       "spec": null,
	I0408 22:56:32.861207   21772 command_runner.go:130] >       "pinned": false
	I0408 22:56:32.861210   21772 command_runner.go:130] >     },
	I0408 22:56:32.861213   21772 command_runner.go:130] >     {
	I0408 22:56:32.861219   21772 command_runner.go:130] >       "id": "d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d",
	I0408 22:56:32.861224   21772 command_runner.go:130] >       "repoTags": [
	I0408 22:56:32.861229   21772 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.32.2"
	I0408 22:56:32.861234   21772 command_runner.go:130] >       ],
	I0408 22:56:32.861238   21772 command_runner.go:130] >       "repoDigests": [
	I0408 22:56:32.861256   21772 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:45710d74cfd5aa10a001d0cf81747b77c28617444ffee0503d12f1dcd7450f76",
	I0408 22:56:32.861266   21772 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:c98f93221ffa10bfb46b85966915759dbcaf957098364763242e814fee84363b"
	I0408 22:56:32.861269   21772 command_runner.go:130] >       ],
	I0408 22:56:32.861273   21772 command_runner.go:130] >       "size": "70653254",
	I0408 22:56:32.861275   21772 command_runner.go:130] >       "uid": {
	I0408 22:56:32.861279   21772 command_runner.go:130] >         "value": "0"
	I0408 22:56:32.861282   21772 command_runner.go:130] >       },
	I0408 22:56:32.861286   21772 command_runner.go:130] >       "username": "",
	I0408 22:56:32.861289   21772 command_runner.go:130] >       "spec": null,
	I0408 22:56:32.861293   21772 command_runner.go:130] >       "pinned": false
	I0408 22:56:32.861296   21772 command_runner.go:130] >     },
	I0408 22:56:32.861299   21772 command_runner.go:130] >     {
	I0408 22:56:32.861305   21772 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0408 22:56:32.861314   21772 command_runner.go:130] >       "repoTags": [
	I0408 22:56:32.861319   21772 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0408 22:56:32.861322   21772 command_runner.go:130] >       ],
	I0408 22:56:32.861325   21772 command_runner.go:130] >       "repoDigests": [
	I0408 22:56:32.861332   21772 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0408 22:56:32.861341   21772 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0408 22:56:32.861345   21772 command_runner.go:130] >       ],
	I0408 22:56:32.861349   21772 command_runner.go:130] >       "size": "742080",
	I0408 22:56:32.861357   21772 command_runner.go:130] >       "uid": {
	I0408 22:56:32.861364   21772 command_runner.go:130] >         "value": "65535"
	I0408 22:56:32.861367   21772 command_runner.go:130] >       },
	I0408 22:56:32.861370   21772 command_runner.go:130] >       "username": "",
	I0408 22:56:32.861374   21772 command_runner.go:130] >       "spec": null,
	I0408 22:56:32.861380   21772 command_runner.go:130] >       "pinned": true
	I0408 22:56:32.861382   21772 command_runner.go:130] >     }
	I0408 22:56:32.861385   21772 command_runner.go:130] >   ]
	I0408 22:56:32.861388   21772 command_runner.go:130] > }
	I0408 22:56:32.862015   21772 crio.go:514] all images are preloaded for cri-o runtime.
	I0408 22:56:32.862029   21772 cache_images.go:84] Images are preloaded, skipping loading
	I0408 22:56:32.862035   21772 kubeadm.go:934] updating node { 192.168.39.234 8441 v1.32.2 crio true true} ...
	I0408 22:56:32.862119   21772 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-546336 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.234
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:functional-546336 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
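
The kubelet ExecStart line above is generated per node from that config (binary path from the Kubernetes version, hostname override and node IP from the node entry). A hypothetical sketch of assembling those flags, not the actual kubeadm.go code:

package main

import (
	"fmt"
	"strings"
)

// kubeletFlags builds an ExecStart command line like the one shown in the log.
func kubeletFlags(version, hostname, nodeIP string) string {
	bin := fmt.Sprintf("/var/lib/minikube/binaries/%s/kubelet", version)
	flags := []string{
		"--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf",
		"--config=/var/lib/kubelet/config.yaml",
		"--hostname-override=" + hostname,
		"--kubeconfig=/etc/kubernetes/kubelet.conf",
		"--node-ip=" + nodeIP,
	}
	return bin + " " + strings.Join(flags, " ")
}

func main() {
	fmt.Println(kubeletFlags("v1.32.2", "functional-546336", "192.168.39.234"))
}
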
	I0408 22:56:32.862176   21772 ssh_runner.go:195] Run: crio config
	I0408 22:56:32.900028   21772 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0408 22:56:32.900049   21772 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0408 22:56:32.900055   21772 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0408 22:56:32.900058   21772 command_runner.go:130] > #
	I0408 22:56:32.900065   21772 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0408 22:56:32.900071   21772 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0408 22:56:32.900077   21772 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0408 22:56:32.900097   21772 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0408 22:56:32.900101   21772 command_runner.go:130] > # reload'.
	I0408 22:56:32.900107   21772 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0408 22:56:32.900113   21772 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0408 22:56:32.900120   21772 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0408 22:56:32.900130   21772 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0408 22:56:32.900135   21772 command_runner.go:130] > [crio]
	I0408 22:56:32.900144   21772 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0408 22:56:32.900152   21772 command_runner.go:130] > # containers images, in this directory.
	I0408 22:56:32.900158   21772 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0408 22:56:32.900171   21772 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0408 22:56:32.900182   21772 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0408 22:56:32.900190   21772 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0408 22:56:32.900199   21772 command_runner.go:130] > # imagestore = ""
	I0408 22:56:32.900205   21772 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0408 22:56:32.900213   21772 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0408 22:56:32.900221   21772 command_runner.go:130] > storage_driver = "overlay"
	I0408 22:56:32.900232   21772 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0408 22:56:32.900240   21772 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0408 22:56:32.900247   21772 command_runner.go:130] > storage_option = [
	I0408 22:56:32.900262   21772 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0408 22:56:32.900275   21772 command_runner.go:130] > ]
	I0408 22:56:32.900286   21772 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0408 22:56:32.900296   21772 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0408 22:56:32.900301   21772 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0408 22:56:32.900307   21772 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0408 22:56:32.900312   21772 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0408 22:56:32.900316   21772 command_runner.go:130] > # always happen on a node reboot
	I0408 22:56:32.900323   21772 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0408 22:56:32.900351   21772 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0408 22:56:32.900362   21772 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0408 22:56:32.900370   21772 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0408 22:56:32.900379   21772 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0408 22:56:32.900389   21772 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0408 22:56:32.900401   21772 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0408 22:56:32.900408   21772 command_runner.go:130] > # internal_wipe = true
	I0408 22:56:32.900421   21772 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0408 22:56:32.900433   21772 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0408 22:56:32.900445   21772 command_runner.go:130] > # internal_repair = false
	I0408 22:56:32.900456   21772 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0408 22:56:32.900465   21772 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0408 22:56:32.900477   21772 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0408 22:56:32.900488   21772 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0408 22:56:32.900500   21772 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0408 22:56:32.900506   21772 command_runner.go:130] > [crio.api]
	I0408 22:56:32.900514   21772 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0408 22:56:32.900524   21772 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0408 22:56:32.900532   21772 command_runner.go:130] > # IP address on which the stream server will listen.
	I0408 22:56:32.900539   21772 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0408 22:56:32.900549   21772 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0408 22:56:32.900559   21772 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0408 22:56:32.900565   21772 command_runner.go:130] > # stream_port = "0"
	I0408 22:56:32.900572   21772 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0408 22:56:32.900581   21772 command_runner.go:130] > # stream_enable_tls = false
	I0408 22:56:32.900589   21772 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0408 22:56:32.900593   21772 command_runner.go:130] > # stream_idle_timeout = ""
	I0408 22:56:32.900601   21772 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0408 22:56:32.900607   21772 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0408 22:56:32.900614   21772 command_runner.go:130] > # minutes.
	I0408 22:56:32.900620   21772 command_runner.go:130] > # stream_tls_cert = ""
	I0408 22:56:32.900631   21772 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0408 22:56:32.900649   21772 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0408 22:56:32.900658   21772 command_runner.go:130] > # stream_tls_key = ""
	I0408 22:56:32.900667   21772 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0408 22:56:32.900679   21772 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0408 22:56:32.900709   21772 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0408 22:56:32.900720   21772 command_runner.go:130] > # stream_tls_ca = ""
	I0408 22:56:32.900732   21772 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0408 22:56:32.900742   21772 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0408 22:56:32.900753   21772 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0408 22:56:32.900763   21772 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0408 22:56:32.900773   21772 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0408 22:56:32.900785   21772 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0408 22:56:32.900793   21772 command_runner.go:130] > [crio.runtime]
	I0408 22:56:32.900803   21772 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0408 22:56:32.900815   21772 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0408 22:56:32.900822   21772 command_runner.go:130] > # "nofile=1024:2048"
	I0408 22:56:32.900832   21772 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0408 22:56:32.900841   21772 command_runner.go:130] > # default_ulimits = [
	I0408 22:56:32.900847   21772 command_runner.go:130] > # ]
	I0408 22:56:32.900860   21772 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0408 22:56:32.900873   21772 command_runner.go:130] > # no_pivot = false
	I0408 22:56:32.900885   21772 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0408 22:56:32.900897   21772 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0408 22:56:32.900907   21772 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0408 22:56:32.900918   21772 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0408 22:56:32.900932   21772 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0408 22:56:32.900959   21772 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0408 22:56:32.900970   21772 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0408 22:56:32.900976   21772 command_runner.go:130] > # Cgroup setting for conmon
	I0408 22:56:32.900987   21772 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0408 22:56:32.900996   21772 command_runner.go:130] > conmon_cgroup = "pod"
	I0408 22:56:32.901006   21772 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0408 22:56:32.901017   21772 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0408 22:56:32.901030   21772 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0408 22:56:32.901038   21772 command_runner.go:130] > conmon_env = [
	I0408 22:56:32.901047   21772 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0408 22:56:32.901055   21772 command_runner.go:130] > ]
	I0408 22:56:32.901064   21772 command_runner.go:130] > # Additional environment variables to set for all the
	I0408 22:56:32.901075   21772 command_runner.go:130] > # containers. These are overridden if set in the
	I0408 22:56:32.901087   21772 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0408 22:56:32.901094   21772 command_runner.go:130] > # default_env = [
	I0408 22:56:32.901103   21772 command_runner.go:130] > # ]
	I0408 22:56:32.901111   21772 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0408 22:56:32.901125   21772 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I0408 22:56:32.901134   21772 command_runner.go:130] > # selinux = false
	I0408 22:56:32.901143   21772 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0408 22:56:32.901155   21772 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0408 22:56:32.901167   21772 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0408 22:56:32.901177   21772 command_runner.go:130] > # seccomp_profile = ""
	I0408 22:56:32.901186   21772 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0408 22:56:32.901197   21772 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0408 22:56:32.901207   21772 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0408 22:56:32.901217   21772 command_runner.go:130] > # which might increase security.
	I0408 22:56:32.901225   21772 command_runner.go:130] > # This option is currently deprecated,
	I0408 22:56:32.901237   21772 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0408 22:56:32.901255   21772 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0408 22:56:32.901268   21772 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0408 22:56:32.901288   21772 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0408 22:56:32.901314   21772 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0408 22:56:32.901327   21772 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0408 22:56:32.901335   21772 command_runner.go:130] > # This option supports live configuration reload.
	I0408 22:56:32.901345   21772 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0408 22:56:32.901353   21772 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0408 22:56:32.901362   21772 command_runner.go:130] > # the cgroup blockio controller.
	I0408 22:56:32.901369   21772 command_runner.go:130] > # blockio_config_file = ""
	I0408 22:56:32.901382   21772 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0408 22:56:32.901388   21772 command_runner.go:130] > # blockio parameters.
	I0408 22:56:32.901397   21772 command_runner.go:130] > # blockio_reload = false
	I0408 22:56:32.901407   21772 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0408 22:56:32.901414   21772 command_runner.go:130] > # irqbalance daemon.
	I0408 22:56:32.901419   21772 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0408 22:56:32.901425   21772 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0408 22:56:32.901431   21772 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0408 22:56:32.901438   21772 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0408 22:56:32.901446   21772 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0408 22:56:32.901454   21772 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0408 22:56:32.901461   21772 command_runner.go:130] > # This option supports live configuration reload.
	I0408 22:56:32.901468   21772 command_runner.go:130] > # rdt_config_file = ""
	I0408 22:56:32.901476   21772 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0408 22:56:32.901483   21772 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0408 22:56:32.901522   21772 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0408 22:56:32.901531   21772 command_runner.go:130] > # separate_pull_cgroup = ""
	I0408 22:56:32.901538   21772 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0408 22:56:32.901549   21772 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0408 22:56:32.901555   21772 command_runner.go:130] > # will be added.
	I0408 22:56:32.901562   21772 command_runner.go:130] > # default_capabilities = [
	I0408 22:56:32.901571   21772 command_runner.go:130] > # 	"CHOWN",
	I0408 22:56:32.901577   21772 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0408 22:56:32.901585   21772 command_runner.go:130] > # 	"FSETID",
	I0408 22:56:32.901590   21772 command_runner.go:130] > # 	"FOWNER",
	I0408 22:56:32.901596   21772 command_runner.go:130] > # 	"SETGID",
	I0408 22:56:32.901609   21772 command_runner.go:130] > # 	"SETUID",
	I0408 22:56:32.901618   21772 command_runner.go:130] > # 	"SETPCAP",
	I0408 22:56:32.901622   21772 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0408 22:56:32.901628   21772 command_runner.go:130] > # 	"KILL",
	I0408 22:56:32.901632   21772 command_runner.go:130] > # ]
	I0408 22:56:32.901643   21772 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0408 22:56:32.901657   21772 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0408 22:56:32.901671   21772 command_runner.go:130] > # add_inheritable_capabilities = false
	I0408 22:56:32.901681   21772 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0408 22:56:32.901693   21772 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0408 22:56:32.901702   21772 command_runner.go:130] > default_sysctls = [
	I0408 22:56:32.901710   21772 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0408 22:56:32.901718   21772 command_runner.go:130] > ]
	I0408 22:56:32.901725   21772 command_runner.go:130] > # List of devices on the host that a
	I0408 22:56:32.901738   21772 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0408 22:56:32.901744   21772 command_runner.go:130] > # allowed_devices = [
	I0408 22:56:32.901753   21772 command_runner.go:130] > # 	"/dev/fuse",
	I0408 22:56:32.901759   21772 command_runner.go:130] > # ]
	I0408 22:56:32.901768   21772 command_runner.go:130] > # List of additional devices. specified as
	I0408 22:56:32.901782   21772 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0408 22:56:32.901793   21772 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0408 22:56:32.901802   21772 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0408 22:56:32.901811   21772 command_runner.go:130] > # additional_devices = [
	I0408 22:56:32.901816   21772 command_runner.go:130] > # ]
	I0408 22:56:32.901827   21772 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0408 22:56:32.901834   21772 command_runner.go:130] > # cdi_spec_dirs = [
	I0408 22:56:32.901842   21772 command_runner.go:130] > # 	"/etc/cdi",
	I0408 22:56:32.901848   21772 command_runner.go:130] > # 	"/var/run/cdi",
	I0408 22:56:32.901856   21772 command_runner.go:130] > # ]
	I0408 22:56:32.901866   21772 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0408 22:56:32.901878   21772 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0408 22:56:32.901885   21772 command_runner.go:130] > # Defaults to false.
	I0408 22:56:32.901891   21772 command_runner.go:130] > # device_ownership_from_security_context = false
	I0408 22:56:32.901909   21772 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0408 22:56:32.901922   21772 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0408 22:56:32.901928   21772 command_runner.go:130] > # hooks_dir = [
	I0408 22:56:32.901936   21772 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0408 22:56:32.901950   21772 command_runner.go:130] > # ]
	I0408 22:56:32.901959   21772 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0408 22:56:32.901970   21772 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0408 22:56:32.901979   21772 command_runner.go:130] > # its default mounts from the following two files:
	I0408 22:56:32.901990   21772 command_runner.go:130] > #
	I0408 22:56:32.902004   21772 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0408 22:56:32.902015   21772 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0408 22:56:32.902024   21772 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0408 22:56:32.902033   21772 command_runner.go:130] > #
	I0408 22:56:32.902042   21772 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0408 22:56:32.902054   21772 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0408 22:56:32.902067   21772 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0408 22:56:32.902078   21772 command_runner.go:130] > #      only add mounts it finds in this file.
	I0408 22:56:32.902083   21772 command_runner.go:130] > #
	I0408 22:56:32.902092   21772 command_runner.go:130] > # default_mounts_file = ""
	I0408 22:56:32.902103   21772 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0408 22:56:32.902115   21772 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0408 22:56:32.902125   21772 command_runner.go:130] > pids_limit = 1024
	I0408 22:56:32.902135   21772 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I0408 22:56:32.902144   21772 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0408 22:56:32.902151   21772 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0408 22:56:32.902166   21772 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0408 22:56:32.902177   21772 command_runner.go:130] > # log_size_max = -1
	I0408 22:56:32.902187   21772 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0408 22:56:32.902194   21772 command_runner.go:130] > # log_to_journald = false
	I0408 22:56:32.902206   21772 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0408 22:56:32.902216   21772 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0408 22:56:32.902224   21772 command_runner.go:130] > # Path to directory for container attach sockets.
	I0408 22:56:32.902234   21772 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0408 22:56:32.902254   21772 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0408 22:56:32.902264   21772 command_runner.go:130] > # bind_mount_prefix = ""
	I0408 22:56:32.902272   21772 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0408 22:56:32.902281   21772 command_runner.go:130] > # read_only = false
	I0408 22:56:32.902290   21772 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0408 22:56:32.902303   21772 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0408 22:56:32.902311   21772 command_runner.go:130] > # live configuration reload.
	I0408 22:56:32.902315   21772 command_runner.go:130] > # log_level = "info"
	I0408 22:56:32.902325   21772 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0408 22:56:32.902334   21772 command_runner.go:130] > # This option supports live configuration reload.
	I0408 22:56:32.902343   21772 command_runner.go:130] > # log_filter = ""
	I0408 22:56:32.902352   21772 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0408 22:56:32.902366   21772 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0408 22:56:32.902373   21772 command_runner.go:130] > # separated by comma.
	I0408 22:56:32.902387   21772 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0408 22:56:32.902396   21772 command_runner.go:130] > # uid_mappings = ""
	I0408 22:56:32.902405   21772 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0408 22:56:32.902417   21772 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0408 22:56:32.902427   21772 command_runner.go:130] > # separated by comma.
	I0408 22:56:32.902442   21772 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0408 22:56:32.902450   21772 command_runner.go:130] > # gid_mappings = ""
	I0408 22:56:32.902459   21772 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0408 22:56:32.902472   21772 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0408 22:56:32.902481   21772 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0408 22:56:32.902489   21772 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0408 22:56:32.902499   21772 command_runner.go:130] > # minimum_mappable_uid = -1
	I0408 22:56:32.902508   21772 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0408 22:56:32.902521   21772 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0408 22:56:32.902533   21772 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0408 22:56:32.902545   21772 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0408 22:56:32.902554   21772 command_runner.go:130] > # minimum_mappable_gid = -1
	I0408 22:56:32.902563   21772 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0408 22:56:32.902571   21772 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0408 22:56:32.902584   21772 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0408 22:56:32.902595   21772 command_runner.go:130] > # ctr_stop_timeout = 30
	I0408 22:56:32.902608   21772 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0408 22:56:32.902619   21772 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0408 22:56:32.902629   21772 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0408 22:56:32.902637   21772 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0408 22:56:32.902646   21772 command_runner.go:130] > drop_infra_ctr = false
	I0408 22:56:32.902653   21772 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0408 22:56:32.902661   21772 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0408 22:56:32.902672   21772 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0408 22:56:32.902683   21772 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0408 22:56:32.902696   21772 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0408 22:56:32.902708   21772 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0408 22:56:32.902719   21772 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0408 22:56:32.902730   21772 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0408 22:56:32.902735   21772 command_runner.go:130] > # shared_cpuset = ""
	I0408 22:56:32.902740   21772 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0408 22:56:32.902747   21772 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0408 22:56:32.902753   21772 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0408 22:56:32.902767   21772 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0408 22:56:32.902777   21772 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0408 22:56:32.902789   21772 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0408 22:56:32.902801   21772 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0408 22:56:32.902811   21772 command_runner.go:130] > # enable_criu_support = false
	I0408 22:56:32.902820   21772 command_runner.go:130] > # Enable/disable the generation of the container,
	I0408 22:56:32.902826   21772 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0408 22:56:32.902834   21772 command_runner.go:130] > # enable_pod_events = false
	I0408 22:56:32.902844   21772 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0408 22:56:32.902857   21772 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0408 22:56:32.902867   21772 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0408 22:56:32.902873   21772 command_runner.go:130] > # default_runtime = "runc"
	I0408 22:56:32.902884   21772 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0408 22:56:32.902897   21772 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0408 22:56:32.902917   21772 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0408 22:56:32.902928   21772 command_runner.go:130] > # creation as a file is not desired either.
	I0408 22:56:32.902945   21772 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0408 22:56:32.902956   21772 command_runner.go:130] > # the hostname is being managed dynamically.
	I0408 22:56:32.902962   21772 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0408 22:56:32.902970   21772 command_runner.go:130] > # ]
	I0408 22:56:32.902983   21772 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0408 22:56:32.902993   21772 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0408 22:56:32.903002   21772 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0408 22:56:32.903013   21772 command_runner.go:130] > # Each entry in the table should follow the format:
	I0408 22:56:32.903022   21772 command_runner.go:130] > #
	I0408 22:56:32.903029   21772 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0408 22:56:32.903039   21772 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0408 22:56:32.903114   21772 command_runner.go:130] > # runtime_type = "oci"
	I0408 22:56:32.903129   21772 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0408 22:56:32.903136   21772 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0408 22:56:32.903142   21772 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0408 22:56:32.903150   21772 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0408 22:56:32.903156   21772 command_runner.go:130] > # monitor_env = []
	I0408 22:56:32.903164   21772 command_runner.go:130] > # privileged_without_host_devices = false
	I0408 22:56:32.903171   21772 command_runner.go:130] > # allowed_annotations = []
	I0408 22:56:32.903177   21772 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0408 22:56:32.903186   21772 command_runner.go:130] > # Where:
	I0408 22:56:32.903195   21772 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0408 22:56:32.903207   21772 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0408 22:56:32.903220   21772 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0408 22:56:32.903235   21772 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0408 22:56:32.903243   21772 command_runner.go:130] > #   in $PATH.
	I0408 22:56:32.903253   21772 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0408 22:56:32.903260   21772 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0408 22:56:32.903267   21772 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0408 22:56:32.903275   21772 command_runner.go:130] > #   state.
	I0408 22:56:32.903291   21772 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0408 22:56:32.903308   21772 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0408 22:56:32.903321   21772 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0408 22:56:32.903329   21772 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0408 22:56:32.903340   21772 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0408 22:56:32.903348   21772 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0408 22:56:32.903355   21772 command_runner.go:130] > #   The currently recognized values are:
	I0408 22:56:32.903368   21772 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0408 22:56:32.903382   21772 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0408 22:56:32.903394   21772 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0408 22:56:32.903404   21772 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0408 22:56:32.903418   21772 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0408 22:56:32.903429   21772 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0408 22:56:32.903443   21772 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0408 22:56:32.903456   21772 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0408 22:56:32.903467   21772 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0408 22:56:32.903479   21772 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0408 22:56:32.903489   21772 command_runner.go:130] > #   deprecated option "conmon".
	I0408 22:56:32.903501   21772 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0408 22:56:32.903513   21772 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0408 22:56:32.903527   21772 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0408 22:56:32.903538   21772 command_runner.go:130] > #   should be moved to the container's cgroup
	I0408 22:56:32.903548   21772 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0408 22:56:32.903557   21772 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0408 22:56:32.903568   21772 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0408 22:56:32.903577   21772 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0408 22:56:32.903580   21772 command_runner.go:130] > #
	I0408 22:56:32.903588   21772 command_runner.go:130] > # Using the seccomp notifier feature:
	I0408 22:56:32.903595   21772 command_runner.go:130] > #
	I0408 22:56:32.903604   21772 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0408 22:56:32.903618   21772 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0408 22:56:32.903622   21772 command_runner.go:130] > #
	I0408 22:56:32.903632   21772 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0408 22:56:32.903644   21772 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0408 22:56:32.903657   21772 command_runner.go:130] > #
	I0408 22:56:32.903669   21772 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0408 22:56:32.903678   21772 command_runner.go:130] > # feature.
	I0408 22:56:32.903682   21772 command_runner.go:130] > #
	I0408 22:56:32.903694   21772 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0408 22:56:32.903706   21772 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0408 22:56:32.903718   21772 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0408 22:56:32.903728   21772 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0408 22:56:32.903739   21772 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0408 22:56:32.903747   21772 command_runner.go:130] > #
	I0408 22:56:32.903756   21772 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0408 22:56:32.903766   21772 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0408 22:56:32.903769   21772 command_runner.go:130] > #
	I0408 22:56:32.903777   21772 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0408 22:56:32.903789   21772 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0408 22:56:32.903797   21772 command_runner.go:130] > #
	I0408 22:56:32.903805   21772 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0408 22:56:32.903816   21772 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0408 22:56:32.903828   21772 command_runner.go:130] > # limitation.
	I0408 22:56:32.903839   21772 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0408 22:56:32.903846   21772 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0408 22:56:32.903850   21772 command_runner.go:130] > runtime_type = "oci"
	I0408 22:56:32.903854   21772 command_runner.go:130] > runtime_root = "/run/runc"
	I0408 22:56:32.903860   21772 command_runner.go:130] > runtime_config_path = ""
	I0408 22:56:32.903881   21772 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0408 22:56:32.903890   21772 command_runner.go:130] > monitor_cgroup = "pod"
	I0408 22:56:32.903896   21772 command_runner.go:130] > monitor_exec_cgroup = ""
	I0408 22:56:32.903905   21772 command_runner.go:130] > monitor_env = [
	I0408 22:56:32.903914   21772 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0408 22:56:32.903922   21772 command_runner.go:130] > ]
	I0408 22:56:32.903929   21772 command_runner.go:130] > privileged_without_host_devices = false
	I0408 22:56:32.903943   21772 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0408 22:56:32.903954   21772 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0408 22:56:32.903974   21772 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0408 22:56:32.903992   21772 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0408 22:56:32.904007   21772 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0408 22:56:32.904018   21772 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0408 22:56:32.904031   21772 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0408 22:56:32.904046   21772 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0408 22:56:32.904059   21772 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0408 22:56:32.904070   21772 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0408 22:56:32.904078   21772 command_runner.go:130] > # Example:
	I0408 22:56:32.904085   21772 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0408 22:56:32.904096   21772 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0408 22:56:32.904104   21772 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0408 22:56:32.904109   21772 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0408 22:56:32.904116   21772 command_runner.go:130] > # cpuset = 0
	I0408 22:56:32.904122   21772 command_runner.go:130] > # cpushares = "0-1"
	I0408 22:56:32.904131   21772 command_runner.go:130] > # Where:
	I0408 22:56:32.904138   21772 command_runner.go:130] > # The workload name is workload-type.
	I0408 22:56:32.904151   21772 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0408 22:56:32.904162   21772 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0408 22:56:32.904171   21772 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0408 22:56:32.904185   21772 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0408 22:56:32.904195   21772 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0408 22:56:32.904202   21772 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0408 22:56:32.904216   21772 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0408 22:56:32.904226   21772 command_runner.go:130] > # Default value is set to true
	I0408 22:56:32.904232   21772 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0408 22:56:32.904244   21772 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0408 22:56:32.904253   21772 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0408 22:56:32.904260   21772 command_runner.go:130] > # Default value is set to 'false'
	I0408 22:56:32.904267   21772 command_runner.go:130] > # disable_hostport_mapping = false
	I0408 22:56:32.904275   21772 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0408 22:56:32.904280   21772 command_runner.go:130] > #
	I0408 22:56:32.904288   21772 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0408 22:56:32.904307   21772 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0408 22:56:32.904322   21772 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0408 22:56:32.904335   21772 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0408 22:56:32.904349   21772 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0408 22:56:32.904357   21772 command_runner.go:130] > [crio.image]
	I0408 22:56:32.904363   21772 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0408 22:56:32.904371   21772 command_runner.go:130] > # default_transport = "docker://"
	I0408 22:56:32.904382   21772 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0408 22:56:32.904394   21772 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0408 22:56:32.904404   21772 command_runner.go:130] > # global_auth_file = ""
	I0408 22:56:32.904411   21772 command_runner.go:130] > # The image used to instantiate infra containers.
	I0408 22:56:32.904421   21772 command_runner.go:130] > # This option supports live configuration reload.
	I0408 22:56:32.904431   21772 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.10"
	I0408 22:56:32.904441   21772 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0408 22:56:32.904449   21772 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0408 22:56:32.904454   21772 command_runner.go:130] > # This option supports live configuration reload.
	I0408 22:56:32.904459   21772 command_runner.go:130] > # pause_image_auth_file = ""
	I0408 22:56:32.904464   21772 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0408 22:56:32.904472   21772 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0408 22:56:32.904481   21772 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0408 22:56:32.904494   21772 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0408 22:56:32.904503   21772 command_runner.go:130] > # pause_command = "/pause"
	I0408 22:56:32.904511   21772 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0408 22:56:32.904551   21772 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0408 22:56:32.904556   21772 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0408 22:56:32.904564   21772 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0408 22:56:32.904569   21772 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0408 22:56:32.904578   21772 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0408 22:56:32.904584   21772 command_runner.go:130] > # pinned_images = [
	I0408 22:56:32.904592   21772 command_runner.go:130] > # ]
	I0408 22:56:32.904600   21772 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0408 22:56:32.904607   21772 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0408 22:56:32.904615   21772 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0408 22:56:32.904629   21772 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0408 22:56:32.904642   21772 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0408 22:56:32.904651   21772 command_runner.go:130] > # signature_policy = ""
	I0408 22:56:32.904660   21772 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0408 22:56:32.904672   21772 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0408 22:56:32.904681   21772 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0408 22:56:32.904694   21772 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0408 22:56:32.904702   21772 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0408 22:56:32.904707   21772 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0408 22:56:32.904714   21772 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0408 22:56:32.904720   21772 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0408 22:56:32.904723   21772 command_runner.go:130] > # changing them here.
	I0408 22:56:32.904726   21772 command_runner.go:130] > # insecure_registries = [
	I0408 22:56:32.904729   21772 command_runner.go:130] > # ]
	I0408 22:56:32.904735   21772 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0408 22:56:32.904739   21772 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0408 22:56:32.904743   21772 command_runner.go:130] > # image_volumes = "mkdir"
	I0408 22:56:32.904747   21772 command_runner.go:130] > # Temporary directory to use for storing big files
	I0408 22:56:32.904751   21772 command_runner.go:130] > # big_files_temporary_dir = ""
	I0408 22:56:32.904756   21772 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0408 22:56:32.904760   21772 command_runner.go:130] > # CNI plugins.
	I0408 22:56:32.904763   21772 command_runner.go:130] > [crio.network]
	I0408 22:56:32.904768   21772 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0408 22:56:32.904773   21772 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0408 22:56:32.904777   21772 command_runner.go:130] > # cni_default_network = ""
	I0408 22:56:32.904782   21772 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0408 22:56:32.904786   21772 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0408 22:56:32.904791   21772 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0408 22:56:32.904794   21772 command_runner.go:130] > # plugin_dirs = [
	I0408 22:56:32.904798   21772 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0408 22:56:32.904800   21772 command_runner.go:130] > # ]
	I0408 22:56:32.904805   21772 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0408 22:56:32.904809   21772 command_runner.go:130] > [crio.metrics]
	I0408 22:56:32.904818   21772 command_runner.go:130] > # Globally enable or disable metrics support.
	I0408 22:56:32.904821   21772 command_runner.go:130] > enable_metrics = true
	I0408 22:56:32.904825   21772 command_runner.go:130] > # Specify enabled metrics collectors.
	I0408 22:56:32.904829   21772 command_runner.go:130] > # Per default all metrics are enabled.
	I0408 22:56:32.904834   21772 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0408 22:56:32.904840   21772 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0408 22:56:32.904847   21772 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0408 22:56:32.904853   21772 command_runner.go:130] > # metrics_collectors = [
	I0408 22:56:32.904859   21772 command_runner.go:130] > # 	"operations",
	I0408 22:56:32.904866   21772 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0408 22:56:32.904871   21772 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0408 22:56:32.904875   21772 command_runner.go:130] > # 	"operations_errors",
	I0408 22:56:32.904879   21772 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0408 22:56:32.904882   21772 command_runner.go:130] > # 	"image_pulls_by_name",
	I0408 22:56:32.904888   21772 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0408 22:56:32.904892   21772 command_runner.go:130] > # 	"image_pulls_failures",
	I0408 22:56:32.904895   21772 command_runner.go:130] > # 	"image_pulls_successes",
	I0408 22:56:32.904899   21772 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0408 22:56:32.904903   21772 command_runner.go:130] > # 	"image_layer_reuse",
	I0408 22:56:32.904907   21772 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0408 22:56:32.904911   21772 command_runner.go:130] > # 	"containers_oom_total",
	I0408 22:56:32.904915   21772 command_runner.go:130] > # 	"containers_oom",
	I0408 22:56:32.904918   21772 command_runner.go:130] > # 	"processes_defunct",
	I0408 22:56:32.904922   21772 command_runner.go:130] > # 	"operations_total",
	I0408 22:56:32.904929   21772 command_runner.go:130] > # 	"operations_latency_seconds",
	I0408 22:56:32.904933   21772 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0408 22:56:32.904937   21772 command_runner.go:130] > # 	"operations_errors_total",
	I0408 22:56:32.904947   21772 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0408 22:56:32.904955   21772 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0408 22:56:32.904959   21772 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0408 22:56:32.904963   21772 command_runner.go:130] > # 	"image_pulls_success_total",
	I0408 22:56:32.904967   21772 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0408 22:56:32.904971   21772 command_runner.go:130] > # 	"containers_oom_count_total",
	I0408 22:56:32.904981   21772 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0408 22:56:32.904988   21772 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0408 22:56:32.904991   21772 command_runner.go:130] > # ]
	I0408 22:56:32.905000   21772 command_runner.go:130] > # The port on which the metrics server will listen.
	I0408 22:56:32.905006   21772 command_runner.go:130] > # metrics_port = 9090
	I0408 22:56:32.905011   21772 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0408 22:56:32.905014   21772 command_runner.go:130] > # metrics_socket = ""
	I0408 22:56:32.905019   21772 command_runner.go:130] > # The certificate for the secure metrics server.
	I0408 22:56:32.905024   21772 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0408 22:56:32.905033   21772 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0408 22:56:32.905037   21772 command_runner.go:130] > # certificate on any modification event.
	I0408 22:56:32.905043   21772 command_runner.go:130] > # metrics_cert = ""
	I0408 22:56:32.905048   21772 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0408 22:56:32.905052   21772 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0408 22:56:32.905058   21772 command_runner.go:130] > # metrics_key = ""
	I0408 22:56:32.905064   21772 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0408 22:56:32.905070   21772 command_runner.go:130] > [crio.tracing]
	I0408 22:56:32.905075   21772 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0408 22:56:32.905079   21772 command_runner.go:130] > # enable_tracing = false
	I0408 22:56:32.905087   21772 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0408 22:56:32.905091   21772 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0408 22:56:32.905097   21772 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0408 22:56:32.905104   21772 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0408 22:56:32.905108   21772 command_runner.go:130] > # CRI-O NRI configuration.
	I0408 22:56:32.905113   21772 command_runner.go:130] > [crio.nri]
	I0408 22:56:32.905117   21772 command_runner.go:130] > # Globally enable or disable NRI.
	I0408 22:56:32.905125   21772 command_runner.go:130] > # enable_nri = false
	I0408 22:56:32.905129   21772 command_runner.go:130] > # NRI socket to listen on.
	I0408 22:56:32.905136   21772 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0408 22:56:32.905139   21772 command_runner.go:130] > # NRI plugin directory to use.
	I0408 22:56:32.905144   21772 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0408 22:56:32.905148   21772 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0408 22:56:32.905155   21772 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0408 22:56:32.905164   21772 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0408 22:56:32.905171   21772 command_runner.go:130] > # nri_disable_connections = false
	I0408 22:56:32.905175   21772 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0408 22:56:32.905182   21772 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0408 22:56:32.905186   21772 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0408 22:56:32.905193   21772 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0408 22:56:32.905199   21772 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0408 22:56:32.905204   21772 command_runner.go:130] > [crio.stats]
	I0408 22:56:32.905210   21772 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0408 22:56:32.905217   21772 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0408 22:56:32.905223   21772 command_runner.go:130] > # stats_collection_period = 0
	I0408 22:56:32.905256   21772 command_runner.go:130] ! time="2025-04-08 22:56:32.868436253Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0408 22:56:32.905274   21772 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0408 22:56:32.905342   21772 cni.go:84] Creating CNI manager for ""
	I0408 22:56:32.905354   21772 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0408 22:56:32.905364   21772 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0408 22:56:32.905388   21772 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.234 APIServerPort:8441 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-546336 NodeName:functional-546336 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.234"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.234 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0408 22:56:32.905493   21772 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.234
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-546336"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.234"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.234"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0408 22:56:32.905580   21772 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0408 22:56:32.914548   21772 command_runner.go:130] > kubeadm
	I0408 22:56:32.914564   21772 command_runner.go:130] > kubectl
	I0408 22:56:32.914568   21772 command_runner.go:130] > kubelet
	I0408 22:56:32.914646   21772 binaries.go:44] Found k8s binaries, skipping transfer
	I0408 22:56:32.914718   21772 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0408 22:56:32.923150   21772 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0408 22:56:32.938212   21772 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0408 22:56:32.953395   21772 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2297 bytes)
	I0408 22:56:32.968282   21772 ssh_runner.go:195] Run: grep 192.168.39.234	control-plane.minikube.internal$ /etc/hosts
	I0408 22:56:32.971857   21772 command_runner.go:130] > 192.168.39.234	control-plane.minikube.internal
	I0408 22:56:32.971923   21772 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 22:56:33.097315   21772 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0408 22:56:33.112048   21772 certs.go:68] Setting up /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/functional-546336 for IP: 192.168.39.234
	I0408 22:56:33.112066   21772 certs.go:194] generating shared ca certs ...
	I0408 22:56:33.112083   21772 certs.go:226] acquiring lock for ca certs: {Name:mk0d455aae85017ac942481bbc1202ccedea144f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 22:56:33.112251   21772 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20501-9125/.minikube/ca.key
	I0408 22:56:33.112294   21772 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20501-9125/.minikube/proxy-client-ca.key
	I0408 22:56:33.112308   21772 certs.go:256] generating profile certs ...
	I0408 22:56:33.112383   21772 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/functional-546336/client.key
	I0408 22:56:33.112451   21772 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/functional-546336/apiserver.key.848fae18
	I0408 22:56:33.112486   21772 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/functional-546336/proxy-client.key
	I0408 22:56:33.112495   21772 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20501-9125/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0408 22:56:33.112506   21772 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20501-9125/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0408 22:56:33.112517   21772 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20501-9125/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0408 22:56:33.112526   21772 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20501-9125/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0408 22:56:33.112540   21772 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/functional-546336/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0408 22:56:33.112552   21772 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/functional-546336/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0408 22:56:33.112561   21772 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/functional-546336/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0408 22:56:33.112572   21772 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/functional-546336/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0408 22:56:33.112624   21772 certs.go:484] found cert: /home/jenkins/minikube-integration/20501-9125/.minikube/certs/16314.pem (1338 bytes)
	W0408 22:56:33.112665   21772 certs.go:480] ignoring /home/jenkins/minikube-integration/20501-9125/.minikube/certs/16314_empty.pem, impossibly tiny 0 bytes
	I0408 22:56:33.112678   21772 certs.go:484] found cert: /home/jenkins/minikube-integration/20501-9125/.minikube/certs/ca-key.pem (1679 bytes)
	I0408 22:56:33.112704   21772 certs.go:484] found cert: /home/jenkins/minikube-integration/20501-9125/.minikube/certs/ca.pem (1082 bytes)
	I0408 22:56:33.112735   21772 certs.go:484] found cert: /home/jenkins/minikube-integration/20501-9125/.minikube/certs/cert.pem (1123 bytes)
	I0408 22:56:33.112774   21772 certs.go:484] found cert: /home/jenkins/minikube-integration/20501-9125/.minikube/certs/key.pem (1675 bytes)
	I0408 22:56:33.112819   21772 certs.go:484] found cert: /home/jenkins/minikube-integration/20501-9125/.minikube/files/etc/ssl/certs/163142.pem (1708 bytes)
	I0408 22:56:33.112860   21772 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20501-9125/.minikube/files/etc/ssl/certs/163142.pem -> /usr/share/ca-certificates/163142.pem
	I0408 22:56:33.112879   21772 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20501-9125/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0408 22:56:33.112897   21772 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20501-9125/.minikube/certs/16314.pem -> /usr/share/ca-certificates/16314.pem
	I0408 22:56:33.113475   21772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20501-9125/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0408 22:56:33.137877   21772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20501-9125/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0408 22:56:33.159070   21772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20501-9125/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0408 22:56:33.185298   21772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20501-9125/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0408 22:56:33.207770   21772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/functional-546336/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0408 22:56:33.228856   21772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/functional-546336/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0408 22:56:33.251027   21772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/functional-546336/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0408 22:56:33.272315   21772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/functional-546336/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0408 22:56:33.294625   21772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20501-9125/.minikube/files/etc/ssl/certs/163142.pem --> /usr/share/ca-certificates/163142.pem (1708 bytes)
	I0408 22:56:33.316217   21772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20501-9125/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0408 22:56:33.337786   21772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20501-9125/.minikube/certs/16314.pem --> /usr/share/ca-certificates/16314.pem (1338 bytes)
	I0408 22:56:33.358722   21772 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0408 22:56:33.373131   21772 ssh_runner.go:195] Run: openssl version
	I0408 22:56:33.378702   21772 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0408 22:56:33.378755   21772 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/163142.pem && ln -fs /usr/share/ca-certificates/163142.pem /etc/ssl/certs/163142.pem"
	I0408 22:56:33.388262   21772 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/163142.pem
	I0408 22:56:33.392059   21772 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Apr  8 22:53 /usr/share/ca-certificates/163142.pem
	I0408 22:56:33.392090   21772 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  8 22:53 /usr/share/ca-certificates/163142.pem
	I0408 22:56:33.392135   21772 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/163142.pem
	I0408 22:56:33.397236   21772 command_runner.go:130] > 3ec20f2e
	I0408 22:56:33.397295   21772 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/163142.pem /etc/ssl/certs/3ec20f2e.0"
	I0408 22:56:33.405382   21772 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0408 22:56:33.414578   21772 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0408 22:56:33.418346   21772 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Apr  8 22:46 /usr/share/ca-certificates/minikubeCA.pem
	I0408 22:56:33.418448   21772 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  8 22:46 /usr/share/ca-certificates/minikubeCA.pem
	I0408 22:56:33.418490   21772 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0408 22:56:33.423400   21772 command_runner.go:130] > b5213941
	I0408 22:56:33.423452   21772 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0408 22:56:33.431557   21772 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16314.pem && ln -fs /usr/share/ca-certificates/16314.pem /etc/ssl/certs/16314.pem"
	I0408 22:56:33.442046   21772 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16314.pem
	I0408 22:56:33.446095   21772 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Apr  8 22:53 /usr/share/ca-certificates/16314.pem
	I0408 22:56:33.446156   21772 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  8 22:53 /usr/share/ca-certificates/16314.pem
	I0408 22:56:33.446198   21772 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16314.pem
	I0408 22:56:33.451257   21772 command_runner.go:130] > 51391683
	I0408 22:56:33.451490   21772 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16314.pem /etc/ssl/certs/51391683.0"
	I0408 22:56:33.460149   21772 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0408 22:56:33.463927   21772 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0408 22:56:33.463942   21772 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0408 22:56:33.463948   21772 command_runner.go:130] > Device: 253,1	Inode: 7338542     Links: 1
	I0408 22:56:33.463973   21772 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0408 22:56:33.463986   21772 command_runner.go:130] > Access: 2025-04-08 22:54:04.578144403 +0000
	I0408 22:56:33.463994   21772 command_runner.go:130] > Modify: 2025-04-08 22:54:04.578144403 +0000
	I0408 22:56:33.464003   21772 command_runner.go:130] > Change: 2025-04-08 22:54:04.578144403 +0000
	I0408 22:56:33.464008   21772 command_runner.go:130] >  Birth: 2025-04-08 22:54:04.578144403 +0000
	I0408 22:56:33.464063   21772 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0408 22:56:33.469050   21772 command_runner.go:130] > Certificate will not expire
	I0408 22:56:33.469263   21772 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0408 22:56:33.474068   21772 command_runner.go:130] > Certificate will not expire
	I0408 22:56:33.474186   21772 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0408 22:56:33.478955   21772 command_runner.go:130] > Certificate will not expire
	I0408 22:56:33.479120   21772 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0408 22:56:33.484075   21772 command_runner.go:130] > Certificate will not expire
	I0408 22:56:33.484130   21772 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0408 22:56:33.488910   21772 command_runner.go:130] > Certificate will not expire
	I0408 22:56:33.488951   21772 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0408 22:56:33.493716   21772 command_runner.go:130] > Certificate will not expire
	I0408 22:56:33.493900   21772 kubeadm.go:392] StartCluster: {Name:functional-546336 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:functional-546336 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.234 Port:8441 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 22:56:33.493993   21772 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0408 22:56:33.494051   21772 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0408 22:56:33.531075   21772 command_runner.go:130] > f5383886ea44f313131ed72cfa49949bd8b5d2f4873b4e32d81e980ce2940fd0
	I0408 22:56:33.531123   21772 command_runner.go:130] > c7dc125c272a5b82818d376a354500ce1464ae56dbc755c0a0779f8c284bfc5c
	I0408 22:56:33.531134   21772 command_runner.go:130] > 0b6b87627794e032e13a615d5ea5b991ffef3843bb9ae6e9f153041eb733d782
	I0408 22:56:33.531145   21772 command_runner.go:130] > d19ba2e208f70c1259cbd17e3566fbc44b8b4d173fc3026591fa8cea6fad11ec
	I0408 22:56:33.531154   21772 command_runner.go:130] > a02e7488bb5d2cf6fe89c9af4932fa61408d59b64f5415e68dd23aef1e5f092c
	I0408 22:56:33.531170   21772 command_runner.go:130] > e50303177ff5685821e81bebd1db71a63709a33fc7ff89cb111d923979605c70
	I0408 22:56:33.531180   21772 command_runner.go:130] > d31c5cb795e76e9631c155d0b7e96672025f714ef26b9972bd87e759a40f7e4d
	I0408 22:56:33.531194   21772 command_runner.go:130] > 090c0b802b3a3a27f9446836815daacc48a4cfa1ed0b54043325e0eada99d664
	I0408 22:56:33.531207   21772 command_runner.go:130] > f5685f897beb5418ec57fb5f80ab70b0ffe4b406ecf635a6249b528e65cabfc4
	I0408 22:56:33.531221   21772 command_runner.go:130] > 31aa14fb57438b0d736b005aed16b4fb5438a2d6ce4af59f042b06d0271bcaa5
	I0408 22:56:33.531245   21772 cri.go:89] found id: "f5383886ea44f313131ed72cfa49949bd8b5d2f4873b4e32d81e980ce2940fd0"
	I0408 22:56:33.531257   21772 cri.go:89] found id: "c7dc125c272a5b82818d376a354500ce1464ae56dbc755c0a0779f8c284bfc5c"
	I0408 22:56:33.531266   21772 cri.go:89] found id: "0b6b87627794e032e13a615d5ea5b991ffef3843bb9ae6e9f153041eb733d782"
	I0408 22:56:33.531275   21772 cri.go:89] found id: "d19ba2e208f70c1259cbd17e3566fbc44b8b4d173fc3026591fa8cea6fad11ec"
	I0408 22:56:33.531284   21772 cri.go:89] found id: "a02e7488bb5d2cf6fe89c9af4932fa61408d59b64f5415e68dd23aef1e5f092c"
	I0408 22:56:33.531294   21772 cri.go:89] found id: "e50303177ff5685821e81bebd1db71a63709a33fc7ff89cb111d923979605c70"
	I0408 22:56:33.531302   21772 cri.go:89] found id: "d31c5cb795e76e9631c155d0b7e96672025f714ef26b9972bd87e759a40f7e4d"
	I0408 22:56:33.531308   21772 cri.go:89] found id: "090c0b802b3a3a27f9446836815daacc48a4cfa1ed0b54043325e0eada99d664"
	I0408 22:56:33.531312   21772 cri.go:89] found id: "f5685f897beb5418ec57fb5f80ab70b0ffe4b406ecf635a6249b528e65cabfc4"
	I0408 22:56:33.531318   21772 cri.go:89] found id: "31aa14fb57438b0d736b005aed16b4fb5438a2d6ce4af59f042b06d0271bcaa5"
	I0408 22:56:33.531323   21772 cri.go:89] found id: ""
	I0408 22:56:33.531374   21772 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-546336 -n functional-546336
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-546336 -n functional-546336: exit status 2 (213.854844ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "functional-546336" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/serial/KubectlGetPods (336.28s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (336.2s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:733: (dbg) Run:  out/minikube-linux-amd64 -p functional-546336 kubectl -- --context functional-546336 get pods
functional_test.go:733: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-546336 kubectl -- --context functional-546336 get pods: exit status 1 (92.628839ms)

                                                
                                                
** stderr ** 
	E0408 23:20:07.190627   28035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.39.234:8441/api?timeout=32s\": dial tcp 192.168.39.234:8441: connect: connection refused"
	E0408 23:20:07.192236   28035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.39.234:8441/api?timeout=32s\": dial tcp 192.168.39.234:8441: connect: connection refused"
	E0408 23:20:07.193683   28035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.39.234:8441/api?timeout=32s\": dial tcp 192.168.39.234:8441: connect: connection refused"
	E0408 23:20:07.195174   28035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.39.234:8441/api?timeout=32s\": dial tcp 192.168.39.234:8441: connect: connection refused"
	E0408 23:20:07.196641   28035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.39.234:8441/api?timeout=32s\": dial tcp 192.168.39.234:8441: connect: connection refused"
	The connection to the server 192.168.39.234:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:736: failed to get pods. args "out/minikube-linux-amd64 -p functional-546336 kubectl -- --context functional-546336 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-546336 -n functional-546336
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-546336 -n functional-546336: exit status 2 (209.313606ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestFunctional/serial/MinikubeKubectlCmd FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/serial/MinikubeKubectlCmd]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p functional-546336 logs -n 25
E0408 23:22:57.940033   16314 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/addons-355098/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p functional-546336 logs -n 25: (5m35.62661796s)
helpers_test.go:252: TestFunctional/serial/MinikubeKubectlCmd logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| Command |                    Args                     |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| pause   | nospam-715453 --log_dir                     | nospam-715453     | jenkins | v1.35.0 | 08 Apr 25 22:53 UTC | 08 Apr 25 22:53 UTC |
	|         | /tmp/nospam-715453 pause                    |                   |         |         |                     |                     |
	| unpause | nospam-715453 --log_dir                     | nospam-715453     | jenkins | v1.35.0 | 08 Apr 25 22:53 UTC | 08 Apr 25 22:53 UTC |
	|         | /tmp/nospam-715453 unpause                  |                   |         |         |                     |                     |
	| unpause | nospam-715453 --log_dir                     | nospam-715453     | jenkins | v1.35.0 | 08 Apr 25 22:53 UTC | 08 Apr 25 22:53 UTC |
	|         | /tmp/nospam-715453 unpause                  |                   |         |         |                     |                     |
	| unpause | nospam-715453 --log_dir                     | nospam-715453     | jenkins | v1.35.0 | 08 Apr 25 22:53 UTC | 08 Apr 25 22:53 UTC |
	|         | /tmp/nospam-715453 unpause                  |                   |         |         |                     |                     |
	| stop    | nospam-715453 --log_dir                     | nospam-715453     | jenkins | v1.35.0 | 08 Apr 25 22:53 UTC | 08 Apr 25 22:53 UTC |
	|         | /tmp/nospam-715453 stop                     |                   |         |         |                     |                     |
	| stop    | nospam-715453 --log_dir                     | nospam-715453     | jenkins | v1.35.0 | 08 Apr 25 22:53 UTC | 08 Apr 25 22:53 UTC |
	|         | /tmp/nospam-715453 stop                     |                   |         |         |                     |                     |
	| stop    | nospam-715453 --log_dir                     | nospam-715453     | jenkins | v1.35.0 | 08 Apr 25 22:53 UTC | 08 Apr 25 22:53 UTC |
	|         | /tmp/nospam-715453 stop                     |                   |         |         |                     |                     |
	| delete  | -p nospam-715453                            | nospam-715453     | jenkins | v1.35.0 | 08 Apr 25 22:53 UTC | 08 Apr 25 22:53 UTC |
	| start   | -p functional-546336                        | functional-546336 | jenkins | v1.35.0 | 08 Apr 25 22:53 UTC | 08 Apr 25 22:54 UTC |
	|         | --memory=4000                               |                   |         |         |                     |                     |
	|         | --apiserver-port=8441                       |                   |         |         |                     |                     |
	|         | --wait=all --driver=kvm2                    |                   |         |         |                     |                     |
	|         | --container-runtime=crio                    |                   |         |         |                     |                     |
	| start   | -p functional-546336                        | functional-546336 | jenkins | v1.35.0 | 08 Apr 25 22:54 UTC |                     |
	|         | --alsologtostderr -v=8                      |                   |         |         |                     |                     |
	| cache   | functional-546336 cache add                 | functional-546336 | jenkins | v1.35.0 | 08 Apr 25 23:19 UTC | 08 Apr 25 23:20 UTC |
	|         | registry.k8s.io/pause:3.1                   |                   |         |         |                     |                     |
	| cache   | functional-546336 cache add                 | functional-546336 | jenkins | v1.35.0 | 08 Apr 25 23:20 UTC | 08 Apr 25 23:20 UTC |
	|         | registry.k8s.io/pause:3.3                   |                   |         |         |                     |                     |
	| cache   | functional-546336 cache add                 | functional-546336 | jenkins | v1.35.0 | 08 Apr 25 23:20 UTC | 08 Apr 25 23:20 UTC |
	|         | registry.k8s.io/pause:latest                |                   |         |         |                     |                     |
	| cache   | functional-546336 cache add                 | functional-546336 | jenkins | v1.35.0 | 08 Apr 25 23:20 UTC | 08 Apr 25 23:20 UTC |
	|         | minikube-local-cache-test:functional-546336 |                   |         |         |                     |                     |
	| cache   | functional-546336 cache delete              | functional-546336 | jenkins | v1.35.0 | 08 Apr 25 23:20 UTC | 08 Apr 25 23:20 UTC |
	|         | minikube-local-cache-test:functional-546336 |                   |         |         |                     |                     |
	| cache   | delete                                      | minikube          | jenkins | v1.35.0 | 08 Apr 25 23:20 UTC | 08 Apr 25 23:20 UTC |
	|         | registry.k8s.io/pause:3.3                   |                   |         |         |                     |                     |
	| cache   | list                                        | minikube          | jenkins | v1.35.0 | 08 Apr 25 23:20 UTC | 08 Apr 25 23:20 UTC |
	| ssh     | functional-546336 ssh sudo                  | functional-546336 | jenkins | v1.35.0 | 08 Apr 25 23:20 UTC | 08 Apr 25 23:20 UTC |
	|         | crictl images                               |                   |         |         |                     |                     |
	| ssh     | functional-546336                           | functional-546336 | jenkins | v1.35.0 | 08 Apr 25 23:20 UTC | 08 Apr 25 23:20 UTC |
	|         | ssh sudo crictl rmi                         |                   |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                |                   |         |         |                     |                     |
	| ssh     | functional-546336 ssh                       | functional-546336 | jenkins | v1.35.0 | 08 Apr 25 23:20 UTC |                     |
	|         | sudo crictl inspecti                        |                   |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                |                   |         |         |                     |                     |
	| cache   | functional-546336 cache reload              | functional-546336 | jenkins | v1.35.0 | 08 Apr 25 23:20 UTC | 08 Apr 25 23:20 UTC |
	| ssh     | functional-546336 ssh                       | functional-546336 | jenkins | v1.35.0 | 08 Apr 25 23:20 UTC | 08 Apr 25 23:20 UTC |
	|         | sudo crictl inspecti                        |                   |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                |                   |         |         |                     |                     |
	| cache   | delete                                      | minikube          | jenkins | v1.35.0 | 08 Apr 25 23:20 UTC | 08 Apr 25 23:20 UTC |
	|         | registry.k8s.io/pause:3.1                   |                   |         |         |                     |                     |
	| cache   | delete                                      | minikube          | jenkins | v1.35.0 | 08 Apr 25 23:20 UTC | 08 Apr 25 23:20 UTC |
	|         | registry.k8s.io/pause:latest                |                   |         |         |                     |                     |
	| kubectl | functional-546336 kubectl --                | functional-546336 | jenkins | v1.35.0 | 08 Apr 25 23:20 UTC |                     |
	|         | --context functional-546336                 |                   |         |         |                     |                     |
	|         | get pods                                    |                   |         |         |                     |                     |
	|---------|---------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/08 22:54:53
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0408 22:54:53.750429   21772 out.go:345] Setting OutFile to fd 1 ...
	I0408 22:54:53.750673   21772 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0408 22:54:53.750778   21772 out.go:358] Setting ErrFile to fd 2...
	I0408 22:54:53.750790   21772 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0408 22:54:53.751041   21772 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20501-9125/.minikube/bin
	I0408 22:54:53.751600   21772 out.go:352] Setting JSON to false
	I0408 22:54:53.752542   21772 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":2239,"bootTime":1744150655,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0408 22:54:53.752628   21772 start.go:139] virtualization: kvm guest
	I0408 22:54:53.754529   21772 out.go:177] * [functional-546336] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0408 22:54:53.755700   21772 out.go:177]   - MINIKUBE_LOCATION=20501
	I0408 22:54:53.755700   21772 notify.go:220] Checking for updates...
	I0408 22:54:53.757645   21772 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0408 22:54:53.758881   21772 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20501-9125/kubeconfig
	I0408 22:54:53.760110   21772 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20501-9125/.minikube
	I0408 22:54:53.761221   21772 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0408 22:54:53.762262   21772 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0408 22:54:53.764007   21772 config.go:182] Loaded profile config "functional-546336": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0408 22:54:53.764090   21772 driver.go:404] Setting default libvirt URI to qemu:///system
	I0408 22:54:53.764531   21772 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 22:54:53.764591   21772 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 22:54:53.780528   21772 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37317
	I0408 22:54:53.780962   21772 main.go:141] libmachine: () Calling .GetVersion
	I0408 22:54:53.781388   21772 main.go:141] libmachine: Using API Version  1
	I0408 22:54:53.781409   21772 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 22:54:53.781752   21772 main.go:141] libmachine: () Calling .GetMachineName
	I0408 22:54:53.781914   21772 main.go:141] libmachine: (functional-546336) Calling .DriverName
	I0408 22:54:53.819375   21772 out.go:177] * Using the kvm2 driver based on existing profile
	I0408 22:54:53.820528   21772 start.go:297] selected driver: kvm2
	I0408 22:54:53.820538   21772 start.go:901] validating driver "kvm2" against &{Name:functional-546336 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:functional-546336 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.234 Port:8441 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 22:54:53.820619   21772 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0408 22:54:53.820910   21772 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 22:54:53.820988   21772 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20501-9125/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0408 22:54:53.835403   21772 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0408 22:54:53.836289   21772 cni.go:84] Creating CNI manager for ""
	I0408 22:54:53.836343   21772 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0408 22:54:53.836403   21772 start.go:340] cluster config:
	{Name:functional-546336 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:functional-546336 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.234 Port:8441 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 22:54:53.836507   21772 iso.go:125] acquiring lock: {Name:mk618477bad490b102618c53c9c8c6b34f33ce81 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 22:54:53.838584   21772 out.go:177] * Starting "functional-546336" primary control-plane node in "functional-546336" cluster
	I0408 22:54:53.839517   21772 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0408 22:54:53.839549   21772 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20501-9125/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4
	I0408 22:54:53.839557   21772 cache.go:56] Caching tarball of preloaded images
	I0408 22:54:53.839620   21772 preload.go:172] Found /home/jenkins/minikube-integration/20501-9125/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0408 22:54:53.839629   21772 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on crio
	I0408 22:54:53.839708   21772 profile.go:143] Saving config to /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/functional-546336/config.json ...
	I0408 22:54:53.839890   21772 start.go:360] acquireMachinesLock for functional-546336: {Name:mke7be7b51cfddf557a39ecf6493fff6a1168ec9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 22:54:53.839934   21772 start.go:364] duration metric: took 24.616µs to acquireMachinesLock for "functional-546336"
	I0408 22:54:53.839951   21772 start.go:96] Skipping create...Using existing machine configuration
	I0408 22:54:53.839957   21772 fix.go:54] fixHost starting: 
	I0408 22:54:53.840198   21772 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 22:54:53.840227   21772 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 22:54:53.853842   21772 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36351
	I0408 22:54:53.854248   21772 main.go:141] libmachine: () Calling .GetVersion
	I0408 22:54:53.854642   21772 main.go:141] libmachine: Using API Version  1
	I0408 22:54:53.854660   21772 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 22:54:53.854972   21772 main.go:141] libmachine: () Calling .GetMachineName
	I0408 22:54:53.855161   21772 main.go:141] libmachine: (functional-546336) Calling .DriverName
	I0408 22:54:53.855314   21772 main.go:141] libmachine: (functional-546336) Calling .GetState
	I0408 22:54:53.856978   21772 fix.go:112] recreateIfNeeded on functional-546336: state=Running err=<nil>
	W0408 22:54:53.856995   21772 fix.go:138] unexpected machine state, will restart: <nil>
	I0408 22:54:53.858448   21772 out.go:177] * Updating the running kvm2 "functional-546336" VM ...
	I0408 22:54:53.859370   21772 machine.go:93] provisionDockerMachine start ...
	I0408 22:54:53.859389   21772 main.go:141] libmachine: (functional-546336) Calling .DriverName
	I0408 22:54:53.859573   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHHostname
	I0408 22:54:53.861808   21772 main.go:141] libmachine: (functional-546336) DBG | domain functional-546336 has defined MAC address 52:54:00:8a:a0:b8 in network mk-functional-546336
	I0408 22:54:53.862195   21772 main.go:141] libmachine: (functional-546336) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:a0:b8", ip: ""} in network mk-functional-546336: {Iface:virbr1 ExpiryTime:2025-04-08 23:53:47 +0000 UTC Type:0 Mac:52:54:00:8a:a0:b8 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:functional-546336 Clientid:01:52:54:00:8a:a0:b8}
	I0408 22:54:53.862223   21772 main.go:141] libmachine: (functional-546336) DBG | domain functional-546336 has defined IP address 192.168.39.234 and MAC address 52:54:00:8a:a0:b8 in network mk-functional-546336
	I0408 22:54:53.862331   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHPort
	I0408 22:54:53.862495   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHKeyPath
	I0408 22:54:53.862642   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHKeyPath
	I0408 22:54:53.862769   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHUsername
	I0408 22:54:53.862913   21772 main.go:141] libmachine: Using SSH client type: native
	I0408 22:54:53.863111   21772 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.234 22 <nil> <nil>}
	I0408 22:54:53.863123   21772 main.go:141] libmachine: About to run SSH command:
	hostname
	I0408 22:54:53.975743   21772 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-546336
	
	I0408 22:54:53.975774   21772 main.go:141] libmachine: (functional-546336) Calling .GetMachineName
	I0408 22:54:53.976060   21772 buildroot.go:166] provisioning hostname "functional-546336"
	I0408 22:54:53.976090   21772 main.go:141] libmachine: (functional-546336) Calling .GetMachineName
	I0408 22:54:53.976275   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHHostname
	I0408 22:54:53.978794   21772 main.go:141] libmachine: (functional-546336) DBG | domain functional-546336 has defined MAC address 52:54:00:8a:a0:b8 in network mk-functional-546336
	I0408 22:54:53.979136   21772 main.go:141] libmachine: (functional-546336) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:a0:b8", ip: ""} in network mk-functional-546336: {Iface:virbr1 ExpiryTime:2025-04-08 23:53:47 +0000 UTC Type:0 Mac:52:54:00:8a:a0:b8 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:functional-546336 Clientid:01:52:54:00:8a:a0:b8}
	I0408 22:54:53.979155   21772 main.go:141] libmachine: (functional-546336) DBG | domain functional-546336 has defined IP address 192.168.39.234 and MAC address 52:54:00:8a:a0:b8 in network mk-functional-546336
	I0408 22:54:53.979343   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHPort
	I0408 22:54:53.979538   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHKeyPath
	I0408 22:54:53.979686   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHKeyPath
	I0408 22:54:53.979818   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHUsername
	I0408 22:54:53.979975   21772 main.go:141] libmachine: Using SSH client type: native
	I0408 22:54:53.980186   21772 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.234 22 <nil> <nil>}
	I0408 22:54:53.980207   21772 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-546336 && echo "functional-546336" | sudo tee /etc/hostname
	I0408 22:54:54.107226   21772 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-546336
	
	I0408 22:54:54.107256   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHHostname
	I0408 22:54:54.110121   21772 main.go:141] libmachine: (functional-546336) DBG | domain functional-546336 has defined MAC address 52:54:00:8a:a0:b8 in network mk-functional-546336
	I0408 22:54:54.110402   21772 main.go:141] libmachine: (functional-546336) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:a0:b8", ip: ""} in network mk-functional-546336: {Iface:virbr1 ExpiryTime:2025-04-08 23:53:47 +0000 UTC Type:0 Mac:52:54:00:8a:a0:b8 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:functional-546336 Clientid:01:52:54:00:8a:a0:b8}
	I0408 22:54:54.110442   21772 main.go:141] libmachine: (functional-546336) DBG | domain functional-546336 has defined IP address 192.168.39.234 and MAC address 52:54:00:8a:a0:b8 in network mk-functional-546336
	I0408 22:54:54.110575   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHPort
	I0408 22:54:54.110737   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHKeyPath
	I0408 22:54:54.110870   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHKeyPath
	I0408 22:54:54.110984   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHUsername
	I0408 22:54:54.111111   21772 main.go:141] libmachine: Using SSH client type: native
	I0408 22:54:54.111332   21772 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.234 22 <nil> <nil>}
	I0408 22:54:54.111355   21772 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-546336' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-546336/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-546336' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0408 22:54:54.224292   21772 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0408 22:54:54.224321   21772 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20501-9125/.minikube CaCertPath:/home/jenkins/minikube-integration/20501-9125/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20501-9125/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20501-9125/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20501-9125/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20501-9125/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20501-9125/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20501-9125/.minikube}
	I0408 22:54:54.224341   21772 buildroot.go:174] setting up certificates
	I0408 22:54:54.224352   21772 provision.go:84] configureAuth start
	I0408 22:54:54.224363   21772 main.go:141] libmachine: (functional-546336) Calling .GetMachineName
	I0408 22:54:54.224632   21772 main.go:141] libmachine: (functional-546336) Calling .GetIP
	I0408 22:54:54.227055   21772 main.go:141] libmachine: (functional-546336) DBG | domain functional-546336 has defined MAC address 52:54:00:8a:a0:b8 in network mk-functional-546336
	I0408 22:54:54.227343   21772 main.go:141] libmachine: (functional-546336) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:a0:b8", ip: ""} in network mk-functional-546336: {Iface:virbr1 ExpiryTime:2025-04-08 23:53:47 +0000 UTC Type:0 Mac:52:54:00:8a:a0:b8 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:functional-546336 Clientid:01:52:54:00:8a:a0:b8}
	I0408 22:54:54.227372   21772 main.go:141] libmachine: (functional-546336) DBG | domain functional-546336 has defined IP address 192.168.39.234 and MAC address 52:54:00:8a:a0:b8 in network mk-functional-546336
	I0408 22:54:54.227496   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHHostname
	I0408 22:54:54.229707   21772 main.go:141] libmachine: (functional-546336) DBG | domain functional-546336 has defined MAC address 52:54:00:8a:a0:b8 in network mk-functional-546336
	I0408 22:54:54.230025   21772 main.go:141] libmachine: (functional-546336) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:a0:b8", ip: ""} in network mk-functional-546336: {Iface:virbr1 ExpiryTime:2025-04-08 23:53:47 +0000 UTC Type:0 Mac:52:54:00:8a:a0:b8 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:functional-546336 Clientid:01:52:54:00:8a:a0:b8}
	I0408 22:54:54.230063   21772 main.go:141] libmachine: (functional-546336) DBG | domain functional-546336 has defined IP address 192.168.39.234 and MAC address 52:54:00:8a:a0:b8 in network mk-functional-546336
	I0408 22:54:54.230204   21772 provision.go:143] copyHostCerts
	I0408 22:54:54.230228   21772 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20501-9125/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20501-9125/.minikube/ca.pem
	I0408 22:54:54.230253   21772 exec_runner.go:144] found /home/jenkins/minikube-integration/20501-9125/.minikube/ca.pem, removing ...
	I0408 22:54:54.230267   21772 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20501-9125/.minikube/ca.pem
	I0408 22:54:54.230331   21772 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20501-9125/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20501-9125/.minikube/ca.pem (1082 bytes)
	I0408 22:54:54.230397   21772 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20501-9125/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20501-9125/.minikube/cert.pem
	I0408 22:54:54.230414   21772 exec_runner.go:144] found /home/jenkins/minikube-integration/20501-9125/.minikube/cert.pem, removing ...
	I0408 22:54:54.230421   21772 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20501-9125/.minikube/cert.pem
	I0408 22:54:54.230442   21772 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20501-9125/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20501-9125/.minikube/cert.pem (1123 bytes)
	I0408 22:54:54.230555   21772 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20501-9125/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20501-9125/.minikube/key.pem
	I0408 22:54:54.230580   21772 exec_runner.go:144] found /home/jenkins/minikube-integration/20501-9125/.minikube/key.pem, removing ...
	I0408 22:54:54.230584   21772 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20501-9125/.minikube/key.pem
	I0408 22:54:54.230614   21772 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20501-9125/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20501-9125/.minikube/key.pem (1675 bytes)
	I0408 22:54:54.230663   21772 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20501-9125/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20501-9125/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20501-9125/.minikube/certs/ca-key.pem org=jenkins.functional-546336 san=[127.0.0.1 192.168.39.234 functional-546336 localhost minikube]
	I0408 22:54:54.377433   21772 provision.go:177] copyRemoteCerts
	I0408 22:54:54.377494   21772 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0408 22:54:54.377516   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHHostname
	I0408 22:54:54.379910   21772 main.go:141] libmachine: (functional-546336) DBG | domain functional-546336 has defined MAC address 52:54:00:8a:a0:b8 in network mk-functional-546336
	I0408 22:54:54.380186   21772 main.go:141] libmachine: (functional-546336) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:a0:b8", ip: ""} in network mk-functional-546336: {Iface:virbr1 ExpiryTime:2025-04-08 23:53:47 +0000 UTC Type:0 Mac:52:54:00:8a:a0:b8 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:functional-546336 Clientid:01:52:54:00:8a:a0:b8}
	I0408 22:54:54.380208   21772 main.go:141] libmachine: (functional-546336) DBG | domain functional-546336 has defined IP address 192.168.39.234 and MAC address 52:54:00:8a:a0:b8 in network mk-functional-546336
	I0408 22:54:54.380353   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHPort
	I0408 22:54:54.380512   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHKeyPath
	I0408 22:54:54.380651   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHUsername
	I0408 22:54:54.380759   21772 sshutil.go:53] new ssh client: &{IP:192.168.39.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20501-9125/.minikube/machines/functional-546336/id_rsa Username:docker}
	I0408 22:54:54.469346   21772 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20501-9125/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0408 22:54:54.469406   21772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20501-9125/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0408 22:54:54.492119   21772 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20501-9125/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0408 22:54:54.492170   21772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20501-9125/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0408 22:54:54.515795   21772 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20501-9125/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0408 22:54:54.515854   21772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20501-9125/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0408 22:54:54.538157   21772 provision.go:87] duration metric: took 313.794377ms to configureAuth
	I0408 22:54:54.538179   21772 buildroot.go:189] setting minikube options for container-runtime
	I0408 22:54:54.538348   21772 config.go:182] Loaded profile config "functional-546336": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0408 22:54:54.538415   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHHostname
	I0408 22:54:54.540893   21772 main.go:141] libmachine: (functional-546336) DBG | domain functional-546336 has defined MAC address 52:54:00:8a:a0:b8 in network mk-functional-546336
	I0408 22:54:54.541189   21772 main.go:141] libmachine: (functional-546336) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:a0:b8", ip: ""} in network mk-functional-546336: {Iface:virbr1 ExpiryTime:2025-04-08 23:53:47 +0000 UTC Type:0 Mac:52:54:00:8a:a0:b8 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:functional-546336 Clientid:01:52:54:00:8a:a0:b8}
	I0408 22:54:54.541211   21772 main.go:141] libmachine: (functional-546336) DBG | domain functional-546336 has defined IP address 192.168.39.234 and MAC address 52:54:00:8a:a0:b8 in network mk-functional-546336
	I0408 22:54:54.541388   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHPort
	I0408 22:54:54.541569   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHKeyPath
	I0408 22:54:54.541794   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHKeyPath
	I0408 22:54:54.541956   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHUsername
	I0408 22:54:54.542154   21772 main.go:141] libmachine: Using SSH client type: native
	I0408 22:54:54.542410   21772 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.234 22 <nil> <nil>}
	I0408 22:54:54.542429   21772 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0408 22:55:00.049143   21772 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0408 22:55:00.049177   21772 machine.go:96] duration metric: took 6.189793928s to provisionDockerMachine
	I0408 22:55:00.049193   21772 start.go:293] postStartSetup for "functional-546336" (driver="kvm2")
	I0408 22:55:00.049216   21772 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0408 22:55:00.049238   21772 main.go:141] libmachine: (functional-546336) Calling .DriverName
	I0408 22:55:00.049527   21772 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0408 22:55:00.049554   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHHostname
	I0408 22:55:00.052053   21772 main.go:141] libmachine: (functional-546336) DBG | domain functional-546336 has defined MAC address 52:54:00:8a:a0:b8 in network mk-functional-546336
	I0408 22:55:00.052329   21772 main.go:141] libmachine: (functional-546336) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:a0:b8", ip: ""} in network mk-functional-546336: {Iface:virbr1 ExpiryTime:2025-04-08 23:53:47 +0000 UTC Type:0 Mac:52:54:00:8a:a0:b8 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:functional-546336 Clientid:01:52:54:00:8a:a0:b8}
	I0408 22:55:00.052357   21772 main.go:141] libmachine: (functional-546336) DBG | domain functional-546336 has defined IP address 192.168.39.234 and MAC address 52:54:00:8a:a0:b8 in network mk-functional-546336
	I0408 22:55:00.052449   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHPort
	I0408 22:55:00.052621   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHKeyPath
	I0408 22:55:00.052774   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHUsername
	I0408 22:55:00.052915   21772 sshutil.go:53] new ssh client: &{IP:192.168.39.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20501-9125/.minikube/machines/functional-546336/id_rsa Username:docker}
	I0408 22:55:00.137252   21772 ssh_runner.go:195] Run: cat /etc/os-release
	I0408 22:55:00.140999   21772 command_runner.go:130] > NAME=Buildroot
	I0408 22:55:00.141018   21772 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0408 22:55:00.141022   21772 command_runner.go:130] > ID=buildroot
	I0408 22:55:00.141034   21772 command_runner.go:130] > VERSION_ID=2023.02.9
	I0408 22:55:00.141041   21772 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0408 22:55:00.141078   21772 info.go:137] Remote host: Buildroot 2023.02.9
	I0408 22:55:00.141091   21772 filesync.go:126] Scanning /home/jenkins/minikube-integration/20501-9125/.minikube/addons for local assets ...
	I0408 22:55:00.141153   21772 filesync.go:126] Scanning /home/jenkins/minikube-integration/20501-9125/.minikube/files for local assets ...
	I0408 22:55:00.141241   21772 filesync.go:149] local asset: /home/jenkins/minikube-integration/20501-9125/.minikube/files/etc/ssl/certs/163142.pem -> 163142.pem in /etc/ssl/certs
	I0408 22:55:00.141253   21772 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20501-9125/.minikube/files/etc/ssl/certs/163142.pem -> /etc/ssl/certs/163142.pem
	I0408 22:55:00.141327   21772 filesync.go:149] local asset: /home/jenkins/minikube-integration/20501-9125/.minikube/files/etc/test/nested/copy/16314/hosts -> hosts in /etc/test/nested/copy/16314
	I0408 22:55:00.141336   21772 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20501-9125/.minikube/files/etc/test/nested/copy/16314/hosts -> /etc/test/nested/copy/16314/hosts
	I0408 22:55:00.141386   21772 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/16314
	I0408 22:55:00.149913   21772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20501-9125/.minikube/files/etc/ssl/certs/163142.pem --> /etc/ssl/certs/163142.pem (1708 bytes)
	I0408 22:55:00.172587   21772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20501-9125/.minikube/files/etc/test/nested/copy/16314/hosts --> /etc/test/nested/copy/16314/hosts (40 bytes)
	I0408 22:55:00.194320   21772 start.go:296] duration metric: took 145.104306ms for postStartSetup
	I0408 22:55:00.194353   21772 fix.go:56] duration metric: took 6.354395244s for fixHost
	I0408 22:55:00.194371   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHHostname
	I0408 22:55:00.197105   21772 main.go:141] libmachine: (functional-546336) DBG | domain functional-546336 has defined MAC address 52:54:00:8a:a0:b8 in network mk-functional-546336
	I0408 22:55:00.197468   21772 main.go:141] libmachine: (functional-546336) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:a0:b8", ip: ""} in network mk-functional-546336: {Iface:virbr1 ExpiryTime:2025-04-08 23:53:47 +0000 UTC Type:0 Mac:52:54:00:8a:a0:b8 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:functional-546336 Clientid:01:52:54:00:8a:a0:b8}
	I0408 22:55:00.197508   21772 main.go:141] libmachine: (functional-546336) DBG | domain functional-546336 has defined IP address 192.168.39.234 and MAC address 52:54:00:8a:a0:b8 in network mk-functional-546336
	I0408 22:55:00.197619   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHPort
	I0408 22:55:00.197806   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHKeyPath
	I0408 22:55:00.197977   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHKeyPath
	I0408 22:55:00.198135   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHUsername
	I0408 22:55:00.198315   21772 main.go:141] libmachine: Using SSH client type: native
	I0408 22:55:00.198518   21772 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.234 22 <nil> <nil>}
	I0408 22:55:00.198529   21772 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0408 22:55:00.312401   21772 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744152900.293880637
	
	I0408 22:55:00.312424   21772 fix.go:216] guest clock: 1744152900.293880637
	I0408 22:55:00.312432   21772 fix.go:229] Guest: 2025-04-08 22:55:00.293880637 +0000 UTC Remote: 2025-04-08 22:55:00.194356923 +0000 UTC m=+6.478226412 (delta=99.523714ms)
	I0408 22:55:00.312463   21772 fix.go:200] guest clock delta is within tolerance: 99.523714ms
	I0408 22:55:00.312469   21772 start.go:83] releasing machines lock for "functional-546336", held for 6.472524067s
	I0408 22:55:00.312490   21772 main.go:141] libmachine: (functional-546336) Calling .DriverName
	I0408 22:55:00.312723   21772 main.go:141] libmachine: (functional-546336) Calling .GetIP
	I0408 22:55:00.315235   21772 main.go:141] libmachine: (functional-546336) DBG | domain functional-546336 has defined MAC address 52:54:00:8a:a0:b8 in network mk-functional-546336
	I0408 22:55:00.315592   21772 main.go:141] libmachine: (functional-546336) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:a0:b8", ip: ""} in network mk-functional-546336: {Iface:virbr1 ExpiryTime:2025-04-08 23:53:47 +0000 UTC Type:0 Mac:52:54:00:8a:a0:b8 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:functional-546336 Clientid:01:52:54:00:8a:a0:b8}
	I0408 22:55:00.315620   21772 main.go:141] libmachine: (functional-546336) DBG | domain functional-546336 has defined IP address 192.168.39.234 and MAC address 52:54:00:8a:a0:b8 in network mk-functional-546336
	I0408 22:55:00.315756   21772 main.go:141] libmachine: (functional-546336) Calling .DriverName
	I0408 22:55:00.316286   21772 main.go:141] libmachine: (functional-546336) Calling .DriverName
	I0408 22:55:00.316432   21772 main.go:141] libmachine: (functional-546336) Calling .DriverName
	I0408 22:55:00.316535   21772 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0408 22:55:00.316574   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHHostname
	I0408 22:55:00.316683   21772 ssh_runner.go:195] Run: cat /version.json
	I0408 22:55:00.316708   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHHostname
	I0408 22:55:00.319048   21772 main.go:141] libmachine: (functional-546336) DBG | domain functional-546336 has defined MAC address 52:54:00:8a:a0:b8 in network mk-functional-546336
	I0408 22:55:00.319325   21772 main.go:141] libmachine: (functional-546336) DBG | domain functional-546336 has defined MAC address 52:54:00:8a:a0:b8 in network mk-functional-546336
	I0408 22:55:00.319354   21772 main.go:141] libmachine: (functional-546336) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:a0:b8", ip: ""} in network mk-functional-546336: {Iface:virbr1 ExpiryTime:2025-04-08 23:53:47 +0000 UTC Type:0 Mac:52:54:00:8a:a0:b8 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:functional-546336 Clientid:01:52:54:00:8a:a0:b8}
	I0408 22:55:00.319371   21772 main.go:141] libmachine: (functional-546336) DBG | domain functional-546336 has defined IP address 192.168.39.234 and MAC address 52:54:00:8a:a0:b8 in network mk-functional-546336
	I0408 22:55:00.319522   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHPort
	I0408 22:55:00.319696   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHKeyPath
	I0408 22:55:00.319776   21772 main.go:141] libmachine: (functional-546336) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:a0:b8", ip: ""} in network mk-functional-546336: {Iface:virbr1 ExpiryTime:2025-04-08 23:53:47 +0000 UTC Type:0 Mac:52:54:00:8a:a0:b8 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:functional-546336 Clientid:01:52:54:00:8a:a0:b8}
	I0408 22:55:00.319817   21772 main.go:141] libmachine: (functional-546336) DBG | domain functional-546336 has defined IP address 192.168.39.234 and MAC address 52:54:00:8a:a0:b8 in network mk-functional-546336
	I0408 22:55:00.319891   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHUsername
	I0408 22:55:00.319984   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHPort
	I0408 22:55:00.320037   21772 sshutil.go:53] new ssh client: &{IP:192.168.39.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20501-9125/.minikube/machines/functional-546336/id_rsa Username:docker}
	I0408 22:55:00.320121   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHKeyPath
	I0408 22:55:00.320259   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHUsername
	I0408 22:55:00.320368   21772 sshutil.go:53] new ssh client: &{IP:192.168.39.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20501-9125/.minikube/machines/functional-546336/id_rsa Username:docker}
	I0408 22:55:00.470604   21772 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0408 22:55:00.470683   21772 command_runner.go:130] > {"iso_version": "v1.35.0", "kicbase_version": "v0.0.45-1736763277-20236", "minikube_version": "v1.35.0", "commit": "3fb24bd87c8c8761e2515e1a9ee13835a389ed68"}
	I0408 22:55:00.470819   21772 ssh_runner.go:195] Run: systemctl --version
	I0408 22:55:00.499552   21772 command_runner.go:130] > systemd 252 (252)
	I0408 22:55:00.499604   21772 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0408 22:55:00.500041   21772 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0408 22:55:00.827340   21772 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0408 22:55:00.834963   21772 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0408 22:55:00.835008   21772 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0408 22:55:00.835072   21772 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0408 22:55:00.877281   21772 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0408 22:55:00.877304   21772 start.go:495] detecting cgroup driver to use...
	I0408 22:55:00.877378   21772 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0408 22:55:00.940318   21772 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0408 22:55:01.008191   21772 docker.go:217] disabling cri-docker service (if available) ...
	I0408 22:55:01.008253   21772 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0408 22:55:01.030120   21772 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0408 22:55:01.062576   21772 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0408 22:55:01.269983   21772 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0408 22:55:01.496425   21772 docker.go:233] disabling docker service ...
	I0408 22:55:01.496502   21772 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0408 22:55:01.519064   21772 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0408 22:55:01.540326   21772 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0408 22:55:01.741595   21772 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0408 22:55:01.913173   21772 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0408 22:55:01.927297   21772 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0408 22:55:01.950625   21772 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0408 22:55:01.951000   21772 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0408 22:55:01.951058   21772 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 22:55:01.962726   21772 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0408 22:55:01.962790   21772 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 22:55:01.974651   21772 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 22:55:01.985351   21772 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 22:55:01.996381   21772 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0408 22:55:02.012061   21772 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 22:55:02.024694   21772 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 22:55:02.036195   21772 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 22:55:02.045483   21772 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0408 22:55:02.053886   21772 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0408 22:55:02.053960   21772 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0408 22:55:02.066815   21772 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 22:55:02.213651   21772 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0408 22:56:32.679193   21772 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.465496752s)
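The CRI-O reconfiguration above (pause image, cgroup driver, conmon cgroup, default sysctls) followed by the slow restart amounts to roughly this shell sequence; the sed expressions and paths are copied from the Run: lines in the log, so this is only a condensed sketch of what the test driver executed over SSH, not an extra step:

	  sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf
	  sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	  sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
	  sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
	  # plus the default_sysctls block adding net.ipv4.ip_unprivileged_port_start=0 (see the two sed commands above)
	  sudo systemctl daemon-reload
	  sudo systemctl restart crio    # took 1m30.46s in this run, which is most of the delay in this start-up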
	I0408 22:56:32.679231   21772 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0408 22:56:32.679281   21772 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0408 22:56:32.684914   21772 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0408 22:56:32.684956   21772 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0408 22:56:32.684981   21772 command_runner.go:130] > Device: 0,22	Inode: 1501        Links: 1
	I0408 22:56:32.684990   21772 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0408 22:56:32.684996   21772 command_runner.go:130] > Access: 2025-04-08 22:56:32.503580497 +0000
	I0408 22:56:32.685001   21772 command_runner.go:130] > Modify: 2025-04-08 22:56:32.503580497 +0000
	I0408 22:56:32.685010   21772 command_runner.go:130] > Change: 2025-04-08 22:56:32.503580497 +0000
	I0408 22:56:32.685013   21772 command_runner.go:130] >  Birth: -
	I0408 22:56:32.685205   21772 start.go:563] Will wait 60s for crictl version
	I0408 22:56:32.685262   21772 ssh_runner.go:195] Run: which crictl
	I0408 22:56:32.688828   21772 command_runner.go:130] > /usr/bin/crictl
	I0408 22:56:32.688893   21772 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0408 22:56:32.724970   21772 command_runner.go:130] > Version:  0.1.0
	I0408 22:56:32.724989   21772 command_runner.go:130] > RuntimeName:  cri-o
	I0408 22:56:32.724994   21772 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0408 22:56:32.724998   21772 command_runner.go:130] > RuntimeApiVersion:  v1
	I0408 22:56:32.725893   21772 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
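Once crio.sock exists, the runtime is probed through crictl; reproducing that check by hand on the node looks roughly like the following (same paths as in the log lines above):

	  stat /var/run/crio/crio.sock     # the socket must appear before the 60s wait expires
	  sudo /usr/bin/crictl version     # expects RuntimeName cri-o, RuntimeApiVersion v1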
	I0408 22:56:32.725977   21772 ssh_runner.go:195] Run: crio --version
	I0408 22:56:32.752723   21772 command_runner.go:130] > crio version 1.29.1
	I0408 22:56:32.752740   21772 command_runner.go:130] > Version:        1.29.1
	I0408 22:56:32.752746   21772 command_runner.go:130] > GitCommit:      unknown
	I0408 22:56:32.752750   21772 command_runner.go:130] > GitCommitDate:  unknown
	I0408 22:56:32.752754   21772 command_runner.go:130] > GitTreeState:   clean
	I0408 22:56:32.752759   21772 command_runner.go:130] > BuildDate:      2025-01-14T08:57:58Z
	I0408 22:56:32.752763   21772 command_runner.go:130] > GoVersion:      go1.21.6
	I0408 22:56:32.752767   21772 command_runner.go:130] > Compiler:       gc
	I0408 22:56:32.752771   21772 command_runner.go:130] > Platform:       linux/amd64
	I0408 22:56:32.752775   21772 command_runner.go:130] > Linkmode:       dynamic
	I0408 22:56:32.752779   21772 command_runner.go:130] > BuildTags:      
	I0408 22:56:32.752783   21772 command_runner.go:130] >   containers_image_ostree_stub
	I0408 22:56:32.752787   21772 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0408 22:56:32.752791   21772 command_runner.go:130] >   btrfs_noversion
	I0408 22:56:32.752795   21772 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0408 22:56:32.752800   21772 command_runner.go:130] >   libdm_no_deferred_remove
	I0408 22:56:32.752804   21772 command_runner.go:130] >   seccomp
	I0408 22:56:32.752810   21772 command_runner.go:130] > LDFlags:          unknown
	I0408 22:56:32.752814   21772 command_runner.go:130] > SeccompEnabled:   true
	I0408 22:56:32.752818   21772 command_runner.go:130] > AppArmorEnabled:  false
	I0408 22:56:32.753859   21772 ssh_runner.go:195] Run: crio --version
	I0408 22:56:32.778913   21772 command_runner.go:130] > crio version 1.29.1
	I0408 22:56:32.778948   21772 command_runner.go:130] > Version:        1.29.1
	I0408 22:56:32.778957   21772 command_runner.go:130] > GitCommit:      unknown
	I0408 22:56:32.778962   21772 command_runner.go:130] > GitCommitDate:  unknown
	I0408 22:56:32.778967   21772 command_runner.go:130] > GitTreeState:   clean
	I0408 22:56:32.778975   21772 command_runner.go:130] > BuildDate:      2025-01-14T08:57:58Z
	I0408 22:56:32.778980   21772 command_runner.go:130] > GoVersion:      go1.21.6
	I0408 22:56:32.778986   21772 command_runner.go:130] > Compiler:       gc
	I0408 22:56:32.778993   21772 command_runner.go:130] > Platform:       linux/amd64
	I0408 22:56:32.779002   21772 command_runner.go:130] > Linkmode:       dynamic
	I0408 22:56:32.779012   21772 command_runner.go:130] > BuildTags:      
	I0408 22:56:32.779020   21772 command_runner.go:130] >   containers_image_ostree_stub
	I0408 22:56:32.779030   21772 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0408 22:56:32.779037   21772 command_runner.go:130] >   btrfs_noversion
	I0408 22:56:32.779048   21772 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0408 22:56:32.779056   21772 command_runner.go:130] >   libdm_no_deferred_remove
	I0408 22:56:32.779064   21772 command_runner.go:130] >   seccomp
	I0408 22:56:32.779072   21772 command_runner.go:130] > LDFlags:          unknown
	I0408 22:56:32.779080   21772 command_runner.go:130] > SeccompEnabled:   true
	I0408 22:56:32.779090   21772 command_runner.go:130] > AppArmorEnabled:  false
	I0408 22:56:32.780946   21772 out.go:177] * Preparing Kubernetes v1.32.2 on CRI-O 1.29.1 ...
	I0408 22:56:32.782109   21772 main.go:141] libmachine: (functional-546336) Calling .GetIP
	I0408 22:56:32.785040   21772 main.go:141] libmachine: (functional-546336) DBG | domain functional-546336 has defined MAC address 52:54:00:8a:a0:b8 in network mk-functional-546336
	I0408 22:56:32.785454   21772 main.go:141] libmachine: (functional-546336) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:a0:b8", ip: ""} in network mk-functional-546336: {Iface:virbr1 ExpiryTime:2025-04-08 23:53:47 +0000 UTC Type:0 Mac:52:54:00:8a:a0:b8 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:functional-546336 Clientid:01:52:54:00:8a:a0:b8}
	I0408 22:56:32.785486   21772 main.go:141] libmachine: (functional-546336) DBG | domain functional-546336 has defined IP address 192.168.39.234 and MAC address 52:54:00:8a:a0:b8 in network mk-functional-546336
	I0408 22:56:32.785755   21772 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0408 22:56:32.789792   21772 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0408 22:56:32.790053   21772 kubeadm.go:883] updating cluster {Name:functional-546336 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:functional-546336 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.234 Port:8441 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
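The cluster definition printed above is minikube's persisted profile config. A way to inspect it outside the logs is to read the profile's config.json under the MINIKUBE_HOME used by this run; the path below is inferred from the SSH key path earlier in the log and should be treated as an assumption:

	  cat /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/functional-546336/config.json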
	I0408 22:56:32.790145   21772 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0408 22:56:32.790182   21772 ssh_runner.go:195] Run: sudo crictl images --output json
	I0408 22:56:32.827503   21772 command_runner.go:130] > {
	I0408 22:56:32.827524   21772 command_runner.go:130] >   "images": [
	I0408 22:56:32.827528   21772 command_runner.go:130] >     {
	I0408 22:56:32.827537   21772 command_runner.go:130] >       "id": "d300845f67aebd4f27f549889087215f476cecdd6d9a715b49a4152857549c56",
	I0408 22:56:32.827541   21772 command_runner.go:130] >       "repoTags": [
	I0408 22:56:32.827547   21772 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20241212-9f82dd49"
	I0408 22:56:32.827550   21772 command_runner.go:130] >       ],
	I0408 22:56:32.827554   21772 command_runner.go:130] >       "repoDigests": [
	I0408 22:56:32.827561   21772 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:56ea59f77258052c4506076525318ffa66817500f68e94a50fdf7d600a280d26",
	I0408 22:56:32.827568   21772 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a3c74735c5fc7cab683a2f94dddec913052aacaa8d8b773c88d428e8dee3dd40"
	I0408 22:56:32.827572   21772 command_runner.go:130] >       ],
	I0408 22:56:32.827576   21772 command_runner.go:130] >       "size": "95714353",
	I0408 22:56:32.827579   21772 command_runner.go:130] >       "uid": null,
	I0408 22:56:32.827583   21772 command_runner.go:130] >       "username": "",
	I0408 22:56:32.827593   21772 command_runner.go:130] >       "spec": null,
	I0408 22:56:32.827600   21772 command_runner.go:130] >       "pinned": false
	I0408 22:56:32.827603   21772 command_runner.go:130] >     },
	I0408 22:56:32.827606   21772 command_runner.go:130] >     {
	I0408 22:56:32.827611   21772 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0408 22:56:32.827614   21772 command_runner.go:130] >       "repoTags": [
	I0408 22:56:32.827620   21772 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0408 22:56:32.827624   21772 command_runner.go:130] >       ],
	I0408 22:56:32.827627   21772 command_runner.go:130] >       "repoDigests": [
	I0408 22:56:32.827635   21772 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0408 22:56:32.827645   21772 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0408 22:56:32.827649   21772 command_runner.go:130] >       ],
	I0408 22:56:32.827657   21772 command_runner.go:130] >       "size": "31470524",
	I0408 22:56:32.827663   21772 command_runner.go:130] >       "uid": null,
	I0408 22:56:32.827667   21772 command_runner.go:130] >       "username": "",
	I0408 22:56:32.827670   21772 command_runner.go:130] >       "spec": null,
	I0408 22:56:32.827674   21772 command_runner.go:130] >       "pinned": false
	I0408 22:56:32.827677   21772 command_runner.go:130] >     },
	I0408 22:56:32.827681   21772 command_runner.go:130] >     {
	I0408 22:56:32.827689   21772 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I0408 22:56:32.827692   21772 command_runner.go:130] >       "repoTags": [
	I0408 22:56:32.827697   21772 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I0408 22:56:32.827703   21772 command_runner.go:130] >       ],
	I0408 22:56:32.827706   21772 command_runner.go:130] >       "repoDigests": [
	I0408 22:56:32.827713   21772 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I0408 22:56:32.827720   21772 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I0408 22:56:32.827724   21772 command_runner.go:130] >       ],
	I0408 22:56:32.827727   21772 command_runner.go:130] >       "size": "63273227",
	I0408 22:56:32.827731   21772 command_runner.go:130] >       "uid": null,
	I0408 22:56:32.827737   21772 command_runner.go:130] >       "username": "nonroot",
	I0408 22:56:32.827740   21772 command_runner.go:130] >       "spec": null,
	I0408 22:56:32.827754   21772 command_runner.go:130] >       "pinned": false
	I0408 22:56:32.827759   21772 command_runner.go:130] >     },
	I0408 22:56:32.827766   21772 command_runner.go:130] >     {
	I0408 22:56:32.827773   21772 command_runner.go:130] >       "id": "a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc",
	I0408 22:56:32.827777   21772 command_runner.go:130] >       "repoTags": [
	I0408 22:56:32.827782   21772 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.16-0"
	I0408 22:56:32.827785   21772 command_runner.go:130] >       ],
	I0408 22:56:32.827791   21772 command_runner.go:130] >       "repoDigests": [
	I0408 22:56:32.827798   21772 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:1d988b04a9476119cdbc2025ba58f6eec19554caf36edb43c357ff412d07e990",
	I0408 22:56:32.827811   21772 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5"
	I0408 22:56:32.827816   21772 command_runner.go:130] >       ],
	I0408 22:56:32.827820   21772 command_runner.go:130] >       "size": "151021823",
	I0408 22:56:32.827824   21772 command_runner.go:130] >       "uid": {
	I0408 22:56:32.827830   21772 command_runner.go:130] >         "value": "0"
	I0408 22:56:32.827833   21772 command_runner.go:130] >       },
	I0408 22:56:32.827837   21772 command_runner.go:130] >       "username": "",
	I0408 22:56:32.827840   21772 command_runner.go:130] >       "spec": null,
	I0408 22:56:32.827844   21772 command_runner.go:130] >       "pinned": false
	I0408 22:56:32.827846   21772 command_runner.go:130] >     },
	I0408 22:56:32.827850   21772 command_runner.go:130] >     {
	I0408 22:56:32.827858   21772 command_runner.go:130] >       "id": "85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef",
	I0408 22:56:32.827874   21772 command_runner.go:130] >       "repoTags": [
	I0408 22:56:32.827882   21772 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.32.2"
	I0408 22:56:32.827890   21772 command_runner.go:130] >       ],
	I0408 22:56:32.827896   21772 command_runner.go:130] >       "repoDigests": [
	I0408 22:56:32.827908   21772 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:48e677803a23233a10a796f3d7edc73223e3fbaceb6113665c1015464a743e9d",
	I0408 22:56:32.827916   21772 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:c47449f3e751588ea0cb74e325e0f83db335a415f4f4c7fb147375dd6c84757f"
	I0408 22:56:32.827922   21772 command_runner.go:130] >       ],
	I0408 22:56:32.827925   21772 command_runner.go:130] >       "size": "98055648",
	I0408 22:56:32.827929   21772 command_runner.go:130] >       "uid": {
	I0408 22:56:32.827932   21772 command_runner.go:130] >         "value": "0"
	I0408 22:56:32.827936   21772 command_runner.go:130] >       },
	I0408 22:56:32.827949   21772 command_runner.go:130] >       "username": "",
	I0408 22:56:32.827954   21772 command_runner.go:130] >       "spec": null,
	I0408 22:56:32.827958   21772 command_runner.go:130] >       "pinned": false
	I0408 22:56:32.827966   21772 command_runner.go:130] >     },
	I0408 22:56:32.827970   21772 command_runner.go:130] >     {
	I0408 22:56:32.827976   21772 command_runner.go:130] >       "id": "b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389",
	I0408 22:56:32.827982   21772 command_runner.go:130] >       "repoTags": [
	I0408 22:56:32.827987   21772 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.32.2"
	I0408 22:56:32.827993   21772 command_runner.go:130] >       ],
	I0408 22:56:32.827996   21772 command_runner.go:130] >       "repoDigests": [
	I0408 22:56:32.828003   21772 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:01669d976f198e210414e4864454330f6cbd4e5fedf1570b0340d206442f2ae5",
	I0408 22:56:32.828013   21772 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:399aa50f4d1361c59dc458e634506d02de32613d03a9a614a21058741162ef90"
	I0408 22:56:32.828019   21772 command_runner.go:130] >       ],
	I0408 22:56:32.828022   21772 command_runner.go:130] >       "size": "90793286",
	I0408 22:56:32.828026   21772 command_runner.go:130] >       "uid": {
	I0408 22:56:32.828029   21772 command_runner.go:130] >         "value": "0"
	I0408 22:56:32.828033   21772 command_runner.go:130] >       },
	I0408 22:56:32.828036   21772 command_runner.go:130] >       "username": "",
	I0408 22:56:32.828043   21772 command_runner.go:130] >       "spec": null,
	I0408 22:56:32.828046   21772 command_runner.go:130] >       "pinned": false
	I0408 22:56:32.828049   21772 command_runner.go:130] >     },
	I0408 22:56:32.828052   21772 command_runner.go:130] >     {
	I0408 22:56:32.828058   21772 command_runner.go:130] >       "id": "f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5",
	I0408 22:56:32.828064   21772 command_runner.go:130] >       "repoTags": [
	I0408 22:56:32.828069   21772 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.32.2"
	I0408 22:56:32.828074   21772 command_runner.go:130] >       ],
	I0408 22:56:32.828078   21772 command_runner.go:130] >       "repoDigests": [
	I0408 22:56:32.828085   21772 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:83c025f0faa6799fab6645102a98138e39a9a7db2be3bc792c79d72659b1805d",
	I0408 22:56:32.828094   21772 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:ab90de2ec2cbade95df799a63d85e438f51817055ecee067b694fdd0f776e15d"
	I0408 22:56:32.828097   21772 command_runner.go:130] >       ],
	I0408 22:56:32.828102   21772 command_runner.go:130] >       "size": "95271321",
	I0408 22:56:32.828108   21772 command_runner.go:130] >       "uid": null,
	I0408 22:56:32.828111   21772 command_runner.go:130] >       "username": "",
	I0408 22:56:32.828115   21772 command_runner.go:130] >       "spec": null,
	I0408 22:56:32.828119   21772 command_runner.go:130] >       "pinned": false
	I0408 22:56:32.828124   21772 command_runner.go:130] >     },
	I0408 22:56:32.828131   21772 command_runner.go:130] >     {
	I0408 22:56:32.828140   21772 command_runner.go:130] >       "id": "d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d",
	I0408 22:56:32.828144   21772 command_runner.go:130] >       "repoTags": [
	I0408 22:56:32.828150   21772 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.32.2"
	I0408 22:56:32.828159   21772 command_runner.go:130] >       ],
	I0408 22:56:32.828165   21772 command_runner.go:130] >       "repoDigests": [
	I0408 22:56:32.828207   21772 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:45710d74cfd5aa10a001d0cf81747b77c28617444ffee0503d12f1dcd7450f76",
	I0408 22:56:32.828220   21772 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:c98f93221ffa10bfb46b85966915759dbcaf957098364763242e814fee84363b"
	I0408 22:56:32.828223   21772 command_runner.go:130] >       ],
	I0408 22:56:32.828227   21772 command_runner.go:130] >       "size": "70653254",
	I0408 22:56:32.828230   21772 command_runner.go:130] >       "uid": {
	I0408 22:56:32.828233   21772 command_runner.go:130] >         "value": "0"
	I0408 22:56:32.828236   21772 command_runner.go:130] >       },
	I0408 22:56:32.828239   21772 command_runner.go:130] >       "username": "",
	I0408 22:56:32.828243   21772 command_runner.go:130] >       "spec": null,
	I0408 22:56:32.828247   21772 command_runner.go:130] >       "pinned": false
	I0408 22:56:32.828250   21772 command_runner.go:130] >     },
	I0408 22:56:32.828253   21772 command_runner.go:130] >     {
	I0408 22:56:32.828259   21772 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0408 22:56:32.828265   21772 command_runner.go:130] >       "repoTags": [
	I0408 22:56:32.828269   21772 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0408 22:56:32.828272   21772 command_runner.go:130] >       ],
	I0408 22:56:32.828276   21772 command_runner.go:130] >       "repoDigests": [
	I0408 22:56:32.828283   21772 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0408 22:56:32.828292   21772 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0408 22:56:32.828295   21772 command_runner.go:130] >       ],
	I0408 22:56:32.828298   21772 command_runner.go:130] >       "size": "742080",
	I0408 22:56:32.828302   21772 command_runner.go:130] >       "uid": {
	I0408 22:56:32.828305   21772 command_runner.go:130] >         "value": "65535"
	I0408 22:56:32.828308   21772 command_runner.go:130] >       },
	I0408 22:56:32.828312   21772 command_runner.go:130] >       "username": "",
	I0408 22:56:32.828318   21772 command_runner.go:130] >       "spec": null,
	I0408 22:56:32.828324   21772 command_runner.go:130] >       "pinned": true
	I0408 22:56:32.828331   21772 command_runner.go:130] >     }
	I0408 22:56:32.828334   21772 command_runner.go:130] >   ]
	I0408 22:56:32.828337   21772 command_runner.go:130] > }
	I0408 22:56:32.829120   21772 crio.go:514] all images are preloaded for cri-o runtime.
	I0408 22:56:32.829135   21772 crio.go:433] Images already preloaded, skipping extraction
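The preload check is simply the crictl image list compared against the expected Kubernetes v1.32.2 image set. A quick manual equivalent on the node is sketched below; jq is not part of the test run and is only an assumed convenience for flattening the JSON shown above:

	  sudo crictl images --output json | jq -r '.images[].repoTags[]'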
	I0408 22:56:32.829174   21772 ssh_runner.go:195] Run: sudo crictl images --output json
	I0408 22:56:32.860598   21772 command_runner.go:130] > {
	I0408 22:56:32.860616   21772 command_runner.go:130] >   "images": [
	I0408 22:56:32.860620   21772 command_runner.go:130] >     {
	I0408 22:56:32.860628   21772 command_runner.go:130] >       "id": "d300845f67aebd4f27f549889087215f476cecdd6d9a715b49a4152857549c56",
	I0408 22:56:32.860632   21772 command_runner.go:130] >       "repoTags": [
	I0408 22:56:32.860637   21772 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20241212-9f82dd49"
	I0408 22:56:32.860641   21772 command_runner.go:130] >       ],
	I0408 22:56:32.860645   21772 command_runner.go:130] >       "repoDigests": [
	I0408 22:56:32.860658   21772 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:56ea59f77258052c4506076525318ffa66817500f68e94a50fdf7d600a280d26",
	I0408 22:56:32.860666   21772 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a3c74735c5fc7cab683a2f94dddec913052aacaa8d8b773c88d428e8dee3dd40"
	I0408 22:56:32.860669   21772 command_runner.go:130] >       ],
	I0408 22:56:32.860674   21772 command_runner.go:130] >       "size": "95714353",
	I0408 22:56:32.860677   21772 command_runner.go:130] >       "uid": null,
	I0408 22:56:32.860682   21772 command_runner.go:130] >       "username": "",
	I0408 22:56:32.860690   21772 command_runner.go:130] >       "spec": null,
	I0408 22:56:32.860694   21772 command_runner.go:130] >       "pinned": false
	I0408 22:56:32.860699   21772 command_runner.go:130] >     },
	I0408 22:56:32.860702   21772 command_runner.go:130] >     {
	I0408 22:56:32.860708   21772 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0408 22:56:32.860712   21772 command_runner.go:130] >       "repoTags": [
	I0408 22:56:32.860719   21772 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0408 22:56:32.860722   21772 command_runner.go:130] >       ],
	I0408 22:56:32.860727   21772 command_runner.go:130] >       "repoDigests": [
	I0408 22:56:32.860734   21772 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0408 22:56:32.860742   21772 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0408 22:56:32.860746   21772 command_runner.go:130] >       ],
	I0408 22:56:32.860752   21772 command_runner.go:130] >       "size": "31470524",
	I0408 22:56:32.860757   21772 command_runner.go:130] >       "uid": null,
	I0408 22:56:32.860761   21772 command_runner.go:130] >       "username": "",
	I0408 22:56:32.860764   21772 command_runner.go:130] >       "spec": null,
	I0408 22:56:32.860768   21772 command_runner.go:130] >       "pinned": false
	I0408 22:56:32.860771   21772 command_runner.go:130] >     },
	I0408 22:56:32.860774   21772 command_runner.go:130] >     {
	I0408 22:56:32.860780   21772 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I0408 22:56:32.860784   21772 command_runner.go:130] >       "repoTags": [
	I0408 22:56:32.860789   21772 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I0408 22:56:32.860793   21772 command_runner.go:130] >       ],
	I0408 22:56:32.860797   21772 command_runner.go:130] >       "repoDigests": [
	I0408 22:56:32.860805   21772 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I0408 22:56:32.860814   21772 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I0408 22:56:32.860818   21772 command_runner.go:130] >       ],
	I0408 22:56:32.860828   21772 command_runner.go:130] >       "size": "63273227",
	I0408 22:56:32.860834   21772 command_runner.go:130] >       "uid": null,
	I0408 22:56:32.860838   21772 command_runner.go:130] >       "username": "nonroot",
	I0408 22:56:32.860842   21772 command_runner.go:130] >       "spec": null,
	I0408 22:56:32.860848   21772 command_runner.go:130] >       "pinned": false
	I0408 22:56:32.860851   21772 command_runner.go:130] >     },
	I0408 22:56:32.860854   21772 command_runner.go:130] >     {
	I0408 22:56:32.860860   21772 command_runner.go:130] >       "id": "a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc",
	I0408 22:56:32.860866   21772 command_runner.go:130] >       "repoTags": [
	I0408 22:56:32.860871   21772 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.16-0"
	I0408 22:56:32.860878   21772 command_runner.go:130] >       ],
	I0408 22:56:32.860882   21772 command_runner.go:130] >       "repoDigests": [
	I0408 22:56:32.860891   21772 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:1d988b04a9476119cdbc2025ba58f6eec19554caf36edb43c357ff412d07e990",
	I0408 22:56:32.860905   21772 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5"
	I0408 22:56:32.860911   21772 command_runner.go:130] >       ],
	I0408 22:56:32.860915   21772 command_runner.go:130] >       "size": "151021823",
	I0408 22:56:32.860921   21772 command_runner.go:130] >       "uid": {
	I0408 22:56:32.860925   21772 command_runner.go:130] >         "value": "0"
	I0408 22:56:32.860931   21772 command_runner.go:130] >       },
	I0408 22:56:32.860946   21772 command_runner.go:130] >       "username": "",
	I0408 22:56:32.860953   21772 command_runner.go:130] >       "spec": null,
	I0408 22:56:32.860957   21772 command_runner.go:130] >       "pinned": false
	I0408 22:56:32.860962   21772 command_runner.go:130] >     },
	I0408 22:56:32.860965   21772 command_runner.go:130] >     {
	I0408 22:56:32.860971   21772 command_runner.go:130] >       "id": "85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef",
	I0408 22:56:32.860977   21772 command_runner.go:130] >       "repoTags": [
	I0408 22:56:32.860982   21772 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.32.2"
	I0408 22:56:32.860985   21772 command_runner.go:130] >       ],
	I0408 22:56:32.860990   21772 command_runner.go:130] >       "repoDigests": [
	I0408 22:56:32.860997   21772 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:48e677803a23233a10a796f3d7edc73223e3fbaceb6113665c1015464a743e9d",
	I0408 22:56:32.861007   21772 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:c47449f3e751588ea0cb74e325e0f83db335a415f4f4c7fb147375dd6c84757f"
	I0408 22:56:32.861010   21772 command_runner.go:130] >       ],
	I0408 22:56:32.861014   21772 command_runner.go:130] >       "size": "98055648",
	I0408 22:56:32.861024   21772 command_runner.go:130] >       "uid": {
	I0408 22:56:32.861030   21772 command_runner.go:130] >         "value": "0"
	I0408 22:56:32.861033   21772 command_runner.go:130] >       },
	I0408 22:56:32.861037   21772 command_runner.go:130] >       "username": "",
	I0408 22:56:32.861043   21772 command_runner.go:130] >       "spec": null,
	I0408 22:56:32.861046   21772 command_runner.go:130] >       "pinned": false
	I0408 22:56:32.861049   21772 command_runner.go:130] >     },
	I0408 22:56:32.861052   21772 command_runner.go:130] >     {
	I0408 22:56:32.861060   21772 command_runner.go:130] >       "id": "b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389",
	I0408 22:56:32.861064   21772 command_runner.go:130] >       "repoTags": [
	I0408 22:56:32.861071   21772 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.32.2"
	I0408 22:56:32.861076   21772 command_runner.go:130] >       ],
	I0408 22:56:32.861082   21772 command_runner.go:130] >       "repoDigests": [
	I0408 22:56:32.861090   21772 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:01669d976f198e210414e4864454330f6cbd4e5fedf1570b0340d206442f2ae5",
	I0408 22:56:32.861099   21772 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:399aa50f4d1361c59dc458e634506d02de32613d03a9a614a21058741162ef90"
	I0408 22:56:32.861103   21772 command_runner.go:130] >       ],
	I0408 22:56:32.861106   21772 command_runner.go:130] >       "size": "90793286",
	I0408 22:56:32.861110   21772 command_runner.go:130] >       "uid": {
	I0408 22:56:32.861114   21772 command_runner.go:130] >         "value": "0"
	I0408 22:56:32.861116   21772 command_runner.go:130] >       },
	I0408 22:56:32.861120   21772 command_runner.go:130] >       "username": "",
	I0408 22:56:32.861126   21772 command_runner.go:130] >       "spec": null,
	I0408 22:56:32.861130   21772 command_runner.go:130] >       "pinned": false
	I0408 22:56:32.861133   21772 command_runner.go:130] >     },
	I0408 22:56:32.861136   21772 command_runner.go:130] >     {
	I0408 22:56:32.861143   21772 command_runner.go:130] >       "id": "f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5",
	I0408 22:56:32.861149   21772 command_runner.go:130] >       "repoTags": [
	I0408 22:56:32.861153   21772 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.32.2"
	I0408 22:56:32.861158   21772 command_runner.go:130] >       ],
	I0408 22:56:32.861162   21772 command_runner.go:130] >       "repoDigests": [
	I0408 22:56:32.861169   21772 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:83c025f0faa6799fab6645102a98138e39a9a7db2be3bc792c79d72659b1805d",
	I0408 22:56:32.861178   21772 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:ab90de2ec2cbade95df799a63d85e438f51817055ecee067b694fdd0f776e15d"
	I0408 22:56:32.861182   21772 command_runner.go:130] >       ],
	I0408 22:56:32.861190   21772 command_runner.go:130] >       "size": "95271321",
	I0408 22:56:32.861196   21772 command_runner.go:130] >       "uid": null,
	I0408 22:56:32.861200   21772 command_runner.go:130] >       "username": "",
	I0408 22:56:32.861204   21772 command_runner.go:130] >       "spec": null,
	I0408 22:56:32.861207   21772 command_runner.go:130] >       "pinned": false
	I0408 22:56:32.861210   21772 command_runner.go:130] >     },
	I0408 22:56:32.861213   21772 command_runner.go:130] >     {
	I0408 22:56:32.861219   21772 command_runner.go:130] >       "id": "d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d",
	I0408 22:56:32.861224   21772 command_runner.go:130] >       "repoTags": [
	I0408 22:56:32.861229   21772 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.32.2"
	I0408 22:56:32.861234   21772 command_runner.go:130] >       ],
	I0408 22:56:32.861238   21772 command_runner.go:130] >       "repoDigests": [
	I0408 22:56:32.861256   21772 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:45710d74cfd5aa10a001d0cf81747b77c28617444ffee0503d12f1dcd7450f76",
	I0408 22:56:32.861266   21772 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:c98f93221ffa10bfb46b85966915759dbcaf957098364763242e814fee84363b"
	I0408 22:56:32.861269   21772 command_runner.go:130] >       ],
	I0408 22:56:32.861273   21772 command_runner.go:130] >       "size": "70653254",
	I0408 22:56:32.861275   21772 command_runner.go:130] >       "uid": {
	I0408 22:56:32.861279   21772 command_runner.go:130] >         "value": "0"
	I0408 22:56:32.861282   21772 command_runner.go:130] >       },
	I0408 22:56:32.861286   21772 command_runner.go:130] >       "username": "",
	I0408 22:56:32.861289   21772 command_runner.go:130] >       "spec": null,
	I0408 22:56:32.861293   21772 command_runner.go:130] >       "pinned": false
	I0408 22:56:32.861296   21772 command_runner.go:130] >     },
	I0408 22:56:32.861299   21772 command_runner.go:130] >     {
	I0408 22:56:32.861305   21772 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0408 22:56:32.861314   21772 command_runner.go:130] >       "repoTags": [
	I0408 22:56:32.861319   21772 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0408 22:56:32.861322   21772 command_runner.go:130] >       ],
	I0408 22:56:32.861325   21772 command_runner.go:130] >       "repoDigests": [
	I0408 22:56:32.861332   21772 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0408 22:56:32.861341   21772 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0408 22:56:32.861345   21772 command_runner.go:130] >       ],
	I0408 22:56:32.861349   21772 command_runner.go:130] >       "size": "742080",
	I0408 22:56:32.861357   21772 command_runner.go:130] >       "uid": {
	I0408 22:56:32.861364   21772 command_runner.go:130] >         "value": "65535"
	I0408 22:56:32.861367   21772 command_runner.go:130] >       },
	I0408 22:56:32.861370   21772 command_runner.go:130] >       "username": "",
	I0408 22:56:32.861374   21772 command_runner.go:130] >       "spec": null,
	I0408 22:56:32.861380   21772 command_runner.go:130] >       "pinned": true
	I0408 22:56:32.861382   21772 command_runner.go:130] >     }
	I0408 22:56:32.861385   21772 command_runner.go:130] >   ]
	I0408 22:56:32.861388   21772 command_runner.go:130] > }
	I0408 22:56:32.862015   21772 crio.go:514] all images are preloaded for cri-o runtime.
	I0408 22:56:32.862029   21772 cache_images.go:84] Images are preloaded, skipping loading
	I0408 22:56:32.862035   21772 kubeadm.go:934] updating node { 192.168.39.234 8441 v1.32.2 crio true true} ...
	I0408 22:56:32.862119   21772 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-546336 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.234
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:functional-546336 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
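The kubelet unit fragment above (ExecStart with --hostname-override and --node-ip) is what minikube lays down for this node. On the machine itself the effective flags can be confirmed with systemd's own tooling rather than by reading the test logs, for example:

	  systemctl cat kubelet              # prints the unit plus drop-ins, including the ExecStart shown above
	  cat /var/lib/kubelet/config.yaml   # the --config file referenced in that ExecStart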
	I0408 22:56:32.862176   21772 ssh_runner.go:195] Run: crio config
	I0408 22:56:32.900028   21772 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0408 22:56:32.900049   21772 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0408 22:56:32.900055   21772 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0408 22:56:32.900058   21772 command_runner.go:130] > #
	I0408 22:56:32.900065   21772 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0408 22:56:32.900071   21772 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0408 22:56:32.900077   21772 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0408 22:56:32.900097   21772 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0408 22:56:32.900101   21772 command_runner.go:130] > # reload'.
	I0408 22:56:32.900107   21772 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0408 22:56:32.900113   21772 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0408 22:56:32.900120   21772 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0408 22:56:32.900130   21772 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0408 22:56:32.900135   21772 command_runner.go:130] > [crio]
	I0408 22:56:32.900144   21772 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0408 22:56:32.900152   21772 command_runner.go:130] > # containers images, in this directory.
	I0408 22:56:32.900158   21772 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0408 22:56:32.900171   21772 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0408 22:56:32.900182   21772 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0408 22:56:32.900190   21772 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0408 22:56:32.900199   21772 command_runner.go:130] > # imagestore = ""
	I0408 22:56:32.900205   21772 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0408 22:56:32.900213   21772 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0408 22:56:32.900221   21772 command_runner.go:130] > storage_driver = "overlay"
	I0408 22:56:32.900232   21772 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0408 22:56:32.900240   21772 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0408 22:56:32.900247   21772 command_runner.go:130] > storage_option = [
	I0408 22:56:32.900262   21772 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0408 22:56:32.900275   21772 command_runner.go:130] > ]
	I0408 22:56:32.900286   21772 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0408 22:56:32.900296   21772 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0408 22:56:32.900301   21772 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0408 22:56:32.900307   21772 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0408 22:56:32.900312   21772 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0408 22:56:32.900316   21772 command_runner.go:130] > # always happen on a node reboot
	I0408 22:56:32.900323   21772 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0408 22:56:32.900351   21772 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0408 22:56:32.900362   21772 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0408 22:56:32.900370   21772 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0408 22:56:32.900379   21772 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0408 22:56:32.900389   21772 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0408 22:56:32.900401   21772 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0408 22:56:32.900408   21772 command_runner.go:130] > # internal_wipe = true
	I0408 22:56:32.900421   21772 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0408 22:56:32.900433   21772 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0408 22:56:32.900445   21772 command_runner.go:130] > # internal_repair = false
	I0408 22:56:32.900456   21772 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0408 22:56:32.900465   21772 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0408 22:56:32.900477   21772 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0408 22:56:32.900488   21772 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0408 22:56:32.900500   21772 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0408 22:56:32.900506   21772 command_runner.go:130] > [crio.api]
	I0408 22:56:32.900514   21772 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0408 22:56:32.900524   21772 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0408 22:56:32.900532   21772 command_runner.go:130] > # IP address on which the stream server will listen.
	I0408 22:56:32.900539   21772 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0408 22:56:32.900549   21772 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0408 22:56:32.900559   21772 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0408 22:56:32.900565   21772 command_runner.go:130] > # stream_port = "0"
	I0408 22:56:32.900572   21772 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0408 22:56:32.900581   21772 command_runner.go:130] > # stream_enable_tls = false
	I0408 22:56:32.900589   21772 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0408 22:56:32.900593   21772 command_runner.go:130] > # stream_idle_timeout = ""
	I0408 22:56:32.900601   21772 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0408 22:56:32.900607   21772 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0408 22:56:32.900614   21772 command_runner.go:130] > # minutes.
	I0408 22:56:32.900620   21772 command_runner.go:130] > # stream_tls_cert = ""
	I0408 22:56:32.900631   21772 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0408 22:56:32.900649   21772 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0408 22:56:32.900658   21772 command_runner.go:130] > # stream_tls_key = ""
	I0408 22:56:32.900667   21772 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0408 22:56:32.900679   21772 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0408 22:56:32.900709   21772 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0408 22:56:32.900720   21772 command_runner.go:130] > # stream_tls_ca = ""
	I0408 22:56:32.900732   21772 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0408 22:56:32.900742   21772 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0408 22:56:32.900753   21772 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0408 22:56:32.900763   21772 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0408 22:56:32.900773   21772 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0408 22:56:32.900785   21772 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0408 22:56:32.900793   21772 command_runner.go:130] > [crio.runtime]
	I0408 22:56:32.900803   21772 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0408 22:56:32.900815   21772 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0408 22:56:32.900822   21772 command_runner.go:130] > # "nofile=1024:2048"
	I0408 22:56:32.900832   21772 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0408 22:56:32.900841   21772 command_runner.go:130] > # default_ulimits = [
	I0408 22:56:32.900847   21772 command_runner.go:130] > # ]
	I0408 22:56:32.900860   21772 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0408 22:56:32.900873   21772 command_runner.go:130] > # no_pivot = false
	I0408 22:56:32.900885   21772 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0408 22:56:32.900897   21772 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0408 22:56:32.900907   21772 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0408 22:56:32.900918   21772 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0408 22:56:32.900932   21772 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0408 22:56:32.900959   21772 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0408 22:56:32.900970   21772 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0408 22:56:32.900976   21772 command_runner.go:130] > # Cgroup setting for conmon
	I0408 22:56:32.900987   21772 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0408 22:56:32.900996   21772 command_runner.go:130] > conmon_cgroup = "pod"
	I0408 22:56:32.901006   21772 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0408 22:56:32.901017   21772 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0408 22:56:32.901030   21772 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0408 22:56:32.901038   21772 command_runner.go:130] > conmon_env = [
	I0408 22:56:32.901047   21772 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0408 22:56:32.901055   21772 command_runner.go:130] > ]
	I0408 22:56:32.901064   21772 command_runner.go:130] > # Additional environment variables to set for all the
	I0408 22:56:32.901075   21772 command_runner.go:130] > # containers. These are overridden if set in the
	I0408 22:56:32.901087   21772 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0408 22:56:32.901094   21772 command_runner.go:130] > # default_env = [
	I0408 22:56:32.901103   21772 command_runner.go:130] > # ]
	I0408 22:56:32.901111   21772 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0408 22:56:32.901125   21772 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I0408 22:56:32.901134   21772 command_runner.go:130] > # selinux = false
	I0408 22:56:32.901143   21772 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0408 22:56:32.901155   21772 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0408 22:56:32.901167   21772 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0408 22:56:32.901177   21772 command_runner.go:130] > # seccomp_profile = ""
	I0408 22:56:32.901186   21772 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0408 22:56:32.901197   21772 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0408 22:56:32.901207   21772 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0408 22:56:32.901217   21772 command_runner.go:130] > # which might increase security.
	I0408 22:56:32.901225   21772 command_runner.go:130] > # This option is currently deprecated,
	I0408 22:56:32.901237   21772 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0408 22:56:32.901255   21772 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0408 22:56:32.901268   21772 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0408 22:56:32.901288   21772 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0408 22:56:32.901314   21772 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0408 22:56:32.901327   21772 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0408 22:56:32.901335   21772 command_runner.go:130] > # This option supports live configuration reload.
	I0408 22:56:32.901345   21772 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0408 22:56:32.901353   21772 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0408 22:56:32.901362   21772 command_runner.go:130] > # the cgroup blockio controller.
	I0408 22:56:32.901369   21772 command_runner.go:130] > # blockio_config_file = ""
	I0408 22:56:32.901382   21772 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0408 22:56:32.901388   21772 command_runner.go:130] > # blockio parameters.
	I0408 22:56:32.901397   21772 command_runner.go:130] > # blockio_reload = false
	I0408 22:56:32.901407   21772 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0408 22:56:32.901414   21772 command_runner.go:130] > # irqbalance daemon.
	I0408 22:56:32.901419   21772 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0408 22:56:32.901425   21772 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0408 22:56:32.901431   21772 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0408 22:56:32.901438   21772 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0408 22:56:32.901446   21772 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0408 22:56:32.901454   21772 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0408 22:56:32.901461   21772 command_runner.go:130] > # This option supports live configuration reload.
	I0408 22:56:32.901468   21772 command_runner.go:130] > # rdt_config_file = ""
	I0408 22:56:32.901476   21772 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0408 22:56:32.901483   21772 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0408 22:56:32.901522   21772 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0408 22:56:32.901531   21772 command_runner.go:130] > # separate_pull_cgroup = ""
	I0408 22:56:32.901538   21772 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0408 22:56:32.901549   21772 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0408 22:56:32.901555   21772 command_runner.go:130] > # will be added.
	I0408 22:56:32.901562   21772 command_runner.go:130] > # default_capabilities = [
	I0408 22:56:32.901571   21772 command_runner.go:130] > # 	"CHOWN",
	I0408 22:56:32.901577   21772 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0408 22:56:32.901585   21772 command_runner.go:130] > # 	"FSETID",
	I0408 22:56:32.901590   21772 command_runner.go:130] > # 	"FOWNER",
	I0408 22:56:32.901596   21772 command_runner.go:130] > # 	"SETGID",
	I0408 22:56:32.901609   21772 command_runner.go:130] > # 	"SETUID",
	I0408 22:56:32.901618   21772 command_runner.go:130] > # 	"SETPCAP",
	I0408 22:56:32.901622   21772 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0408 22:56:32.901628   21772 command_runner.go:130] > # 	"KILL",
	I0408 22:56:32.901632   21772 command_runner.go:130] > # ]
	I0408 22:56:32.901643   21772 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0408 22:56:32.901657   21772 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0408 22:56:32.901671   21772 command_runner.go:130] > # add_inheritable_capabilities = false
	I0408 22:56:32.901681   21772 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0408 22:56:32.901693   21772 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0408 22:56:32.901702   21772 command_runner.go:130] > default_sysctls = [
	I0408 22:56:32.901710   21772 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0408 22:56:32.901718   21772 command_runner.go:130] > ]
	I0408 22:56:32.901725   21772 command_runner.go:130] > # List of devices on the host that a
	I0408 22:56:32.901738   21772 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0408 22:56:32.901744   21772 command_runner.go:130] > # allowed_devices = [
	I0408 22:56:32.901753   21772 command_runner.go:130] > # 	"/dev/fuse",
	I0408 22:56:32.901759   21772 command_runner.go:130] > # ]
	I0408 22:56:32.901768   21772 command_runner.go:130] > # List of additional devices, specified as
	I0408 22:56:32.901782   21772 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0408 22:56:32.901793   21772 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0408 22:56:32.901802   21772 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0408 22:56:32.901811   21772 command_runner.go:130] > # additional_devices = [
	I0408 22:56:32.901816   21772 command_runner.go:130] > # ]
	I0408 22:56:32.901827   21772 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0408 22:56:32.901834   21772 command_runner.go:130] > # cdi_spec_dirs = [
	I0408 22:56:32.901842   21772 command_runner.go:130] > # 	"/etc/cdi",
	I0408 22:56:32.901848   21772 command_runner.go:130] > # 	"/var/run/cdi",
	I0408 22:56:32.901856   21772 command_runner.go:130] > # ]
	I0408 22:56:32.901866   21772 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0408 22:56:32.901878   21772 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0408 22:56:32.901885   21772 command_runner.go:130] > # Defaults to false.
	I0408 22:56:32.901891   21772 command_runner.go:130] > # device_ownership_from_security_context = false
	I0408 22:56:32.901909   21772 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0408 22:56:32.901922   21772 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0408 22:56:32.901928   21772 command_runner.go:130] > # hooks_dir = [
	I0408 22:56:32.901936   21772 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0408 22:56:32.901950   21772 command_runner.go:130] > # ]
	I0408 22:56:32.901959   21772 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0408 22:56:32.901970   21772 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0408 22:56:32.901979   21772 command_runner.go:130] > # its default mounts from the following two files:
	I0408 22:56:32.901990   21772 command_runner.go:130] > #
	I0408 22:56:32.902004   21772 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0408 22:56:32.902015   21772 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0408 22:56:32.902024   21772 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0408 22:56:32.902033   21772 command_runner.go:130] > #
	I0408 22:56:32.902042   21772 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0408 22:56:32.902054   21772 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0408 22:56:32.902067   21772 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0408 22:56:32.902078   21772 command_runner.go:130] > #      only add mounts it finds in this file.
	I0408 22:56:32.902083   21772 command_runner.go:130] > #
	I0408 22:56:32.902092   21772 command_runner.go:130] > # default_mounts_file = ""
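	A minimal sketch of the /SRC:/DST mounts format described above, one mount per line; both paths here are hypothetical and are not part of the logged configuration:
	
	  /usr/share/zoneinfo:/usr/share/zoneinfo
	  /etc/example-config:/etc/example-config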
	I0408 22:56:32.902103   21772 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0408 22:56:32.902115   21772 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0408 22:56:32.902125   21772 command_runner.go:130] > pids_limit = 1024
	I0408 22:56:32.902135   21772 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0408 22:56:32.902144   21772 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0408 22:56:32.902151   21772 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0408 22:56:32.902166   21772 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0408 22:56:32.902177   21772 command_runner.go:130] > # log_size_max = -1
	I0408 22:56:32.902187   21772 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0408 22:56:32.902194   21772 command_runner.go:130] > # log_to_journald = false
	I0408 22:56:32.902206   21772 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0408 22:56:32.902216   21772 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0408 22:56:32.902224   21772 command_runner.go:130] > # Path to directory for container attach sockets.
	I0408 22:56:32.902234   21772 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0408 22:56:32.902254   21772 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0408 22:56:32.902264   21772 command_runner.go:130] > # bind_mount_prefix = ""
	I0408 22:56:32.902272   21772 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0408 22:56:32.902281   21772 command_runner.go:130] > # read_only = false
	I0408 22:56:32.902290   21772 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0408 22:56:32.902303   21772 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0408 22:56:32.902311   21772 command_runner.go:130] > # live configuration reload.
	I0408 22:56:32.902315   21772 command_runner.go:130] > # log_level = "info"
	I0408 22:56:32.902325   21772 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0408 22:56:32.902334   21772 command_runner.go:130] > # This option supports live configuration reload.
	I0408 22:56:32.902343   21772 command_runner.go:130] > # log_filter = ""
	I0408 22:56:32.902352   21772 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0408 22:56:32.902366   21772 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0408 22:56:32.902373   21772 command_runner.go:130] > # separated by comma.
	I0408 22:56:32.902387   21772 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0408 22:56:32.902396   21772 command_runner.go:130] > # uid_mappings = ""
	I0408 22:56:32.902405   21772 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0408 22:56:32.902417   21772 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0408 22:56:32.902427   21772 command_runner.go:130] > # separated by comma.
	I0408 22:56:32.902442   21772 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0408 22:56:32.902450   21772 command_runner.go:130] > # gid_mappings = ""
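	A minimal sketch of the containerUID:HostUID:Size / containerGID:HostGID:Size format described above; the ranges are illustrative only, and the options remain deprecated as noted:
	
	  uid_mappings = "0:100000:65536"
	  gid_mappings = "0:100000:65536,65536:200000:65536"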
	I0408 22:56:32.902459   21772 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0408 22:56:32.902472   21772 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0408 22:56:32.902481   21772 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0408 22:56:32.902489   21772 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0408 22:56:32.902499   21772 command_runner.go:130] > # minimum_mappable_uid = -1
	I0408 22:56:32.902508   21772 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0408 22:56:32.902521   21772 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0408 22:56:32.902533   21772 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0408 22:56:32.902545   21772 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0408 22:56:32.902554   21772 command_runner.go:130] > # minimum_mappable_gid = -1
	I0408 22:56:32.902563   21772 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0408 22:56:32.902571   21772 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0408 22:56:32.902584   21772 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0408 22:56:32.902595   21772 command_runner.go:130] > # ctr_stop_timeout = 30
	I0408 22:56:32.902608   21772 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0408 22:56:32.902619   21772 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0408 22:56:32.902629   21772 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0408 22:56:32.902637   21772 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0408 22:56:32.902646   21772 command_runner.go:130] > drop_infra_ctr = false
	I0408 22:56:32.902653   21772 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0408 22:56:32.902661   21772 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0408 22:56:32.902672   21772 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0408 22:56:32.902683   21772 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0408 22:56:32.902696   21772 command_runner.go:130] > # shared_cpuset determines the CPU set which is allowed to be shared between guaranteed containers,
	I0408 22:56:32.902708   21772 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0408 22:56:32.902719   21772 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0408 22:56:32.902730   21772 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0408 22:56:32.902735   21772 command_runner.go:130] > # shared_cpuset = ""
	I0408 22:56:32.902740   21772 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0408 22:56:32.902747   21772 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0408 22:56:32.902753   21772 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0408 22:56:32.902767   21772 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0408 22:56:32.902777   21772 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0408 22:56:32.902789   21772 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0408 22:56:32.902801   21772 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0408 22:56:32.902811   21772 command_runner.go:130] > # enable_criu_support = false
	I0408 22:56:32.902820   21772 command_runner.go:130] > # Enable/disable the generation of the container,
	I0408 22:56:32.902826   21772 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0408 22:56:32.902834   21772 command_runner.go:130] > # enable_pod_events = false
	I0408 22:56:32.902844   21772 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0408 22:56:32.902867   21772 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0408 22:56:32.902873   21772 command_runner.go:130] > # default_runtime = "runc"
	I0408 22:56:32.902884   21772 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0408 22:56:32.902897   21772 command_runner.go:130] > # will cause container creation to fail (as opposed to the current behavior of creating them as directories).
	I0408 22:56:32.902917   21772 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0408 22:56:32.902928   21772 command_runner.go:130] > # creation as a file is not desired either.
	I0408 22:56:32.902945   21772 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0408 22:56:32.902956   21772 command_runner.go:130] > # the hostname is being managed dynamically.
	I0408 22:56:32.902962   21772 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0408 22:56:32.902970   21772 command_runner.go:130] > # ]
	I0408 22:56:32.902983   21772 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0408 22:56:32.902993   21772 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0408 22:56:32.903002   21772 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0408 22:56:32.903013   21772 command_runner.go:130] > # Each entry in the table should follow the format:
	I0408 22:56:32.903022   21772 command_runner.go:130] > #
	I0408 22:56:32.903029   21772 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0408 22:56:32.903039   21772 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0408 22:56:32.903114   21772 command_runner.go:130] > # runtime_type = "oci"
	I0408 22:56:32.903129   21772 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0408 22:56:32.903136   21772 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0408 22:56:32.903142   21772 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0408 22:56:32.903150   21772 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0408 22:56:32.903156   21772 command_runner.go:130] > # monitor_env = []
	I0408 22:56:32.903164   21772 command_runner.go:130] > # privileged_without_host_devices = false
	I0408 22:56:32.903171   21772 command_runner.go:130] > # allowed_annotations = []
	I0408 22:56:32.903177   21772 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0408 22:56:32.903186   21772 command_runner.go:130] > # Where:
	I0408 22:56:32.903195   21772 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0408 22:56:32.903207   21772 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0408 22:56:32.903220   21772 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0408 22:56:32.903235   21772 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0408 22:56:32.903243   21772 command_runner.go:130] > #   in $PATH.
	I0408 22:56:32.903253   21772 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0408 22:56:32.903260   21772 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0408 22:56:32.903267   21772 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0408 22:56:32.903275   21772 command_runner.go:130] > #   state.
	I0408 22:56:32.903291   21772 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0408 22:56:32.903308   21772 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0408 22:56:32.903321   21772 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0408 22:56:32.903329   21772 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0408 22:56:32.903340   21772 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0408 22:56:32.903348   21772 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0408 22:56:32.903355   21772 command_runner.go:130] > #   The currently recognized values are:
	I0408 22:56:32.903368   21772 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0408 22:56:32.903382   21772 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0408 22:56:32.903394   21772 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0408 22:56:32.903404   21772 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0408 22:56:32.903418   21772 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0408 22:56:32.903429   21772 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0408 22:56:32.903443   21772 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0408 22:56:32.903456   21772 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0408 22:56:32.903467   21772 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0408 22:56:32.903479   21772 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0408 22:56:32.903489   21772 command_runner.go:130] > #   deprecated option "conmon".
	I0408 22:56:32.903501   21772 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0408 22:56:32.903513   21772 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0408 22:56:32.903527   21772 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0408 22:56:32.903538   21772 command_runner.go:130] > #   should be moved to the container's cgroup
	I0408 22:56:32.903548   21772 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0408 22:56:32.903557   21772 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0408 22:56:32.903568   21772 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0408 22:56:32.903577   21772 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0408 22:56:32.903580   21772 command_runner.go:130] > #
	I0408 22:56:32.903588   21772 command_runner.go:130] > # Using the seccomp notifier feature:
	I0408 22:56:32.903595   21772 command_runner.go:130] > #
	I0408 22:56:32.903604   21772 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0408 22:56:32.903618   21772 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0408 22:56:32.903622   21772 command_runner.go:130] > #
	I0408 22:56:32.903632   21772 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0408 22:56:32.903644   21772 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0408 22:56:32.903657   21772 command_runner.go:130] > #
	I0408 22:56:32.903669   21772 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0408 22:56:32.903678   21772 command_runner.go:130] > # feature.
	I0408 22:56:32.903682   21772 command_runner.go:130] > #
	I0408 22:56:32.903694   21772 command_runner.go:130] > # If everything is set up, CRI-O will modify chosen seccomp profiles for
	I0408 22:56:32.903706   21772 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0408 22:56:32.903718   21772 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0408 22:56:32.903728   21772 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0408 22:56:32.903739   21772 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0408 22:56:32.903747   21772 command_runner.go:130] > #
	I0408 22:56:32.903756   21772 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0408 22:56:32.903766   21772 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0408 22:56:32.903769   21772 command_runner.go:130] > #
	I0408 22:56:32.903777   21772 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0408 22:56:32.903789   21772 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0408 22:56:32.903797   21772 command_runner.go:130] > #
	I0408 22:56:32.903805   21772 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0408 22:56:32.903816   21772 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0408 22:56:32.903828   21772 command_runner.go:130] > # limitation.
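	A hedged sketch of wiring up the seccomp notifier described above: a hypothetical runtime handler (named runc-debug here, not part of this VM's config) would list the annotation in allowed_annotations, and the pod would then set io.kubernetes.cri-o.seccompNotifierAction=stop together with restartPolicy: Never:
	
	  [crio.runtime.runtimes.runc-debug]
	  runtime_path = "/usr/bin/runc"
	  allowed_annotations = ["io.kubernetes.cri-o.seccompNotifierAction"]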
	I0408 22:56:32.903839   21772 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0408 22:56:32.903846   21772 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0408 22:56:32.903850   21772 command_runner.go:130] > runtime_type = "oci"
	I0408 22:56:32.903854   21772 command_runner.go:130] > runtime_root = "/run/runc"
	I0408 22:56:32.903860   21772 command_runner.go:130] > runtime_config_path = ""
	I0408 22:56:32.903881   21772 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0408 22:56:32.903890   21772 command_runner.go:130] > monitor_cgroup = "pod"
	I0408 22:56:32.903896   21772 command_runner.go:130] > monitor_exec_cgroup = ""
	I0408 22:56:32.903905   21772 command_runner.go:130] > monitor_env = [
	I0408 22:56:32.903914   21772 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0408 22:56:32.903922   21772 command_runner.go:130] > ]
	I0408 22:56:32.903929   21772 command_runner.go:130] > privileged_without_host_devices = false
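	Following the handler format documented above, an additional runtime entry could look like the sketch below; the crun binary and its paths are assumptions and are not part of the configuration dumped in this log:
	
	  [crio.runtime.runtimes.crun]
	  runtime_path = "/usr/bin/crun"
	  runtime_type = "oci"
	  runtime_root = "/run/crun"
	  monitor_path = "/usr/libexec/crio/conmon"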
	I0408 22:56:32.903943   21772 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0408 22:56:32.903954   21772 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0408 22:56:32.903974   21772 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0408 22:56:32.903992   21772 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I0408 22:56:32.904007   21772 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0408 22:56:32.904018   21772 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0408 22:56:32.904031   21772 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0408 22:56:32.904046   21772 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0408 22:56:32.904059   21772 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0408 22:56:32.904070   21772 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0408 22:56:32.904078   21772 command_runner.go:130] > # Example:
	I0408 22:56:32.904085   21772 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0408 22:56:32.904096   21772 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0408 22:56:32.904104   21772 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0408 22:56:32.904109   21772 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0408 22:56:32.904116   21772 command_runner.go:130] > # cpuset = 0
	I0408 22:56:32.904122   21772 command_runner.go:130] > # cpushares = "0-1"
	I0408 22:56:32.904131   21772 command_runner.go:130] > # Where:
	I0408 22:56:32.904138   21772 command_runner.go:130] > # The workload name is workload-type.
	I0408 22:56:32.904151   21772 command_runner.go:130] > # To opt in, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0408 22:56:32.904162   21772 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0408 22:56:32.904171   21772 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0408 22:56:32.904185   21772 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0408 22:56:32.904195   21772 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
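	Putting the workload example above together on the pod side, a pod opting into the workload-type workload might carry annotations like the following, using the per-container form shown above; the container name "app" and the cpushares value are made up:
	
	  io.crio/workload: ""
	  io.crio.workload-type/app: '{"cpushares": "512"}'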
	I0408 22:56:32.904202   21772 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0408 22:56:32.904216   21772 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0408 22:56:32.904226   21772 command_runner.go:130] > # Default value is set to true
	I0408 22:56:32.904232   21772 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0408 22:56:32.904244   21772 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0408 22:56:32.904253   21772 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0408 22:56:32.904260   21772 command_runner.go:130] > # Default value is set to 'false'
	I0408 22:56:32.904267   21772 command_runner.go:130] > # disable_hostport_mapping = false
	I0408 22:56:32.904275   21772 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0408 22:56:32.904280   21772 command_runner.go:130] > #
	I0408 22:56:32.904288   21772 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0408 22:56:32.904307   21772 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0408 22:56:32.904322   21772 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0408 22:56:32.904335   21772 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0408 22:56:32.904349   21772 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0408 22:56:32.904357   21772 command_runner.go:130] > [crio.image]
	I0408 22:56:32.904363   21772 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0408 22:56:32.904371   21772 command_runner.go:130] > # default_transport = "docker://"
	I0408 22:56:32.904382   21772 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0408 22:56:32.904394   21772 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0408 22:56:32.904404   21772 command_runner.go:130] > # global_auth_file = ""
	I0408 22:56:32.904411   21772 command_runner.go:130] > # The image used to instantiate infra containers.
	I0408 22:56:32.904421   21772 command_runner.go:130] > # This option supports live configuration reload.
	I0408 22:56:32.904431   21772 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.10"
	I0408 22:56:32.904441   21772 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0408 22:56:32.904449   21772 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0408 22:56:32.904454   21772 command_runner.go:130] > # This option supports live configuration reload.
	I0408 22:56:32.904459   21772 command_runner.go:130] > # pause_image_auth_file = ""
	I0408 22:56:32.904464   21772 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0408 22:56:32.904472   21772 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0408 22:56:32.904481   21772 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0408 22:56:32.904494   21772 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0408 22:56:32.904503   21772 command_runner.go:130] > # pause_command = "/pause"
	I0408 22:56:32.904511   21772 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0408 22:56:32.904551   21772 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0408 22:56:32.904556   21772 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0408 22:56:32.904564   21772 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0408 22:56:32.904569   21772 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0408 22:56:32.904578   21772 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0408 22:56:32.904584   21772 command_runner.go:130] > # pinned_images = [
	I0408 22:56:32.904592   21772 command_runner.go:130] > # ]
	I0408 22:56:32.904600   21772 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0408 22:56:32.904607   21772 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0408 22:56:32.904615   21772 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0408 22:56:32.904629   21772 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0408 22:56:32.904642   21772 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0408 22:56:32.904651   21772 command_runner.go:130] > # signature_policy = ""
	I0408 22:56:32.904660   21772 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0408 22:56:32.904672   21772 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0408 22:56:32.904681   21772 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0408 22:56:32.904694   21772 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0408 22:56:32.904702   21772 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0408 22:56:32.904707   21772 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
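	Worked example of the path composition described above: with the default signature_policy_dir, an image pulled for the namespace "default" would be checked against the namespace-specific policy below, falling back to signature_policy or the system-wide policy if that file does not exist.
	
	  /etc/crio/policies/default.json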
	I0408 22:56:32.904714   21772 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0408 22:56:32.904720   21772 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0408 22:56:32.904723   21772 command_runner.go:130] > # changing them here.
	I0408 22:56:32.904726   21772 command_runner.go:130] > # insecure_registries = [
	I0408 22:56:32.904729   21772 command_runner.go:130] > # ]
	I0408 22:56:32.904735   21772 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0408 22:56:32.904739   21772 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0408 22:56:32.904743   21772 command_runner.go:130] > # image_volumes = "mkdir"
	I0408 22:56:32.904747   21772 command_runner.go:130] > # Temporary directory to use for storing big files
	I0408 22:56:32.904751   21772 command_runner.go:130] > # big_files_temporary_dir = ""
	I0408 22:56:32.904756   21772 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0408 22:56:32.904760   21772 command_runner.go:130] > # CNI plugins.
	I0408 22:56:32.904763   21772 command_runner.go:130] > [crio.network]
	I0408 22:56:32.904768   21772 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0408 22:56:32.904773   21772 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0408 22:56:32.904777   21772 command_runner.go:130] > # cni_default_network = ""
	I0408 22:56:32.904782   21772 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0408 22:56:32.904786   21772 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0408 22:56:32.904791   21772 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0408 22:56:32.904794   21772 command_runner.go:130] > # plugin_dirs = [
	I0408 22:56:32.904798   21772 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0408 22:56:32.904800   21772 command_runner.go:130] > # ]
	I0408 22:56:32.904805   21772 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0408 22:56:32.904809   21772 command_runner.go:130] > [crio.metrics]
	I0408 22:56:32.904818   21772 command_runner.go:130] > # Globally enable or disable metrics support.
	I0408 22:56:32.904821   21772 command_runner.go:130] > enable_metrics = true
	I0408 22:56:32.904825   21772 command_runner.go:130] > # Specify enabled metrics collectors.
	I0408 22:56:32.904829   21772 command_runner.go:130] > # By default, all metrics are enabled.
	I0408 22:56:32.904834   21772 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0408 22:56:32.904840   21772 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0408 22:56:32.904847   21772 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0408 22:56:32.904853   21772 command_runner.go:130] > # metrics_collectors = [
	I0408 22:56:32.904859   21772 command_runner.go:130] > # 	"operations",
	I0408 22:56:32.904866   21772 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0408 22:56:32.904871   21772 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0408 22:56:32.904875   21772 command_runner.go:130] > # 	"operations_errors",
	I0408 22:56:32.904879   21772 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0408 22:56:32.904882   21772 command_runner.go:130] > # 	"image_pulls_by_name",
	I0408 22:56:32.904888   21772 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0408 22:56:32.904892   21772 command_runner.go:130] > # 	"image_pulls_failures",
	I0408 22:56:32.904895   21772 command_runner.go:130] > # 	"image_pulls_successes",
	I0408 22:56:32.904899   21772 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0408 22:56:32.904903   21772 command_runner.go:130] > # 	"image_layer_reuse",
	I0408 22:56:32.904907   21772 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0408 22:56:32.904911   21772 command_runner.go:130] > # 	"containers_oom_total",
	I0408 22:56:32.904915   21772 command_runner.go:130] > # 	"containers_oom",
	I0408 22:56:32.904918   21772 command_runner.go:130] > # 	"processes_defunct",
	I0408 22:56:32.904922   21772 command_runner.go:130] > # 	"operations_total",
	I0408 22:56:32.904929   21772 command_runner.go:130] > # 	"operations_latency_seconds",
	I0408 22:56:32.904933   21772 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0408 22:56:32.904937   21772 command_runner.go:130] > # 	"operations_errors_total",
	I0408 22:56:32.904947   21772 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0408 22:56:32.904955   21772 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0408 22:56:32.904959   21772 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0408 22:56:32.904963   21772 command_runner.go:130] > # 	"image_pulls_success_total",
	I0408 22:56:32.904967   21772 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0408 22:56:32.904971   21772 command_runner.go:130] > # 	"containers_oom_count_total",
	I0408 22:56:32.904981   21772 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0408 22:56:32.904988   21772 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0408 22:56:32.904991   21772 command_runner.go:130] > # ]
	I0408 22:56:32.905000   21772 command_runner.go:130] > # The port on which the metrics server will listen.
	I0408 22:56:32.905006   21772 command_runner.go:130] > # metrics_port = 9090
	I0408 22:56:32.905011   21772 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0408 22:56:32.905014   21772 command_runner.go:130] > # metrics_socket = ""
	I0408 22:56:32.905019   21772 command_runner.go:130] > # The certificate for the secure metrics server.
	I0408 22:56:32.905024   21772 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0408 22:56:32.905033   21772 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0408 22:56:32.905037   21772 command_runner.go:130] > # certificate on any modification event.
	I0408 22:56:32.905043   21772 command_runner.go:130] > # metrics_cert = ""
	I0408 22:56:32.905048   21772 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0408 22:56:32.905052   21772 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0408 22:56:32.905058   21772 command_runner.go:130] > # metrics_key = ""
	I0408 22:56:32.905064   21772 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0408 22:56:32.905070   21772 command_runner.go:130] > [crio.tracing]
	I0408 22:56:32.905075   21772 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0408 22:56:32.905079   21772 command_runner.go:130] > # enable_tracing = false
	I0408 22:56:32.905087   21772 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I0408 22:56:32.905091   21772 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0408 22:56:32.905097   21772 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0408 22:56:32.905104   21772 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0408 22:56:32.905108   21772 command_runner.go:130] > # CRI-O NRI configuration.
	I0408 22:56:32.905113   21772 command_runner.go:130] > [crio.nri]
	I0408 22:56:32.905117   21772 command_runner.go:130] > # Globally enable or disable NRI.
	I0408 22:56:32.905125   21772 command_runner.go:130] > # enable_nri = false
	I0408 22:56:32.905129   21772 command_runner.go:130] > # NRI socket to listen on.
	I0408 22:56:32.905136   21772 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0408 22:56:32.905139   21772 command_runner.go:130] > # NRI plugin directory to use.
	I0408 22:56:32.905144   21772 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0408 22:56:32.905148   21772 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0408 22:56:32.905155   21772 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0408 22:56:32.905164   21772 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0408 22:56:32.905171   21772 command_runner.go:130] > # nri_disable_connections = false
	I0408 22:56:32.905175   21772 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0408 22:56:32.905182   21772 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0408 22:56:32.905186   21772 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0408 22:56:32.905193   21772 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0408 22:56:32.905199   21772 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0408 22:56:32.905204   21772 command_runner.go:130] > [crio.stats]
	I0408 22:56:32.905210   21772 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0408 22:56:32.905217   21772 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0408 22:56:32.905223   21772 command_runner.go:130] > # stats_collection_period = 0
	I0408 22:56:32.905256   21772 command_runner.go:130] ! time="2025-04-08 22:56:32.868436253Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0408 22:56:32.905274   21772 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0408 22:56:32.905342   21772 cni.go:84] Creating CNI manager for ""
	I0408 22:56:32.905354   21772 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0408 22:56:32.905364   21772 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0408 22:56:32.905388   21772 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.234 APIServerPort:8441 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-546336 NodeName:functional-546336 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.234"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.234 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0408 22:56:32.905493   21772 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.234
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-546336"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.234"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.234"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0408 22:56:32.905580   21772 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0408 22:56:32.914548   21772 command_runner.go:130] > kubeadm
	I0408 22:56:32.914564   21772 command_runner.go:130] > kubectl
	I0408 22:56:32.914568   21772 command_runner.go:130] > kubelet
	I0408 22:56:32.914646   21772 binaries.go:44] Found k8s binaries, skipping transfer
	I0408 22:56:32.914718   21772 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0408 22:56:32.923150   21772 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0408 22:56:32.938212   21772 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0408 22:56:32.953395   21772 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2297 bytes)
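	As an aside not run by the test: the rendered kubeadm config written to /var/tmp/minikube/kubeadm.yaml.new could be sanity-checked on the node before use; recent kubeadm releases ship a validate subcommand, for example:
	
	  sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new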
	I0408 22:56:32.968282   21772 ssh_runner.go:195] Run: grep 192.168.39.234	control-plane.minikube.internal$ /etc/hosts
	I0408 22:56:32.971857   21772 command_runner.go:130] > 192.168.39.234	control-plane.minikube.internal
	I0408 22:56:32.971923   21772 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 22:56:33.097315   21772 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0408 22:56:33.112048   21772 certs.go:68] Setting up /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/functional-546336 for IP: 192.168.39.234
	I0408 22:56:33.112066   21772 certs.go:194] generating shared ca certs ...
	I0408 22:56:33.112083   21772 certs.go:226] acquiring lock for ca certs: {Name:mk0d455aae85017ac942481bbc1202ccedea144f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 22:56:33.112251   21772 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20501-9125/.minikube/ca.key
	I0408 22:56:33.112294   21772 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20501-9125/.minikube/proxy-client-ca.key
	I0408 22:56:33.112308   21772 certs.go:256] generating profile certs ...
	I0408 22:56:33.112383   21772 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/functional-546336/client.key
	I0408 22:56:33.112451   21772 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/functional-546336/apiserver.key.848fae18
	I0408 22:56:33.112486   21772 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/functional-546336/proxy-client.key
	I0408 22:56:33.112495   21772 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20501-9125/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0408 22:56:33.112506   21772 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20501-9125/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0408 22:56:33.112517   21772 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20501-9125/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0408 22:56:33.112526   21772 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20501-9125/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0408 22:56:33.112540   21772 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/functional-546336/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0408 22:56:33.112552   21772 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/functional-546336/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0408 22:56:33.112561   21772 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/functional-546336/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0408 22:56:33.112572   21772 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/functional-546336/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0408 22:56:33.112624   21772 certs.go:484] found cert: /home/jenkins/minikube-integration/20501-9125/.minikube/certs/16314.pem (1338 bytes)
	W0408 22:56:33.112665   21772 certs.go:480] ignoring /home/jenkins/minikube-integration/20501-9125/.minikube/certs/16314_empty.pem, impossibly tiny 0 bytes
	I0408 22:56:33.112678   21772 certs.go:484] found cert: /home/jenkins/minikube-integration/20501-9125/.minikube/certs/ca-key.pem (1679 bytes)
	I0408 22:56:33.112704   21772 certs.go:484] found cert: /home/jenkins/minikube-integration/20501-9125/.minikube/certs/ca.pem (1082 bytes)
	I0408 22:56:33.112735   21772 certs.go:484] found cert: /home/jenkins/minikube-integration/20501-9125/.minikube/certs/cert.pem (1123 bytes)
	I0408 22:56:33.112774   21772 certs.go:484] found cert: /home/jenkins/minikube-integration/20501-9125/.minikube/certs/key.pem (1675 bytes)
	I0408 22:56:33.112819   21772 certs.go:484] found cert: /home/jenkins/minikube-integration/20501-9125/.minikube/files/etc/ssl/certs/163142.pem (1708 bytes)
	I0408 22:56:33.112860   21772 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20501-9125/.minikube/files/etc/ssl/certs/163142.pem -> /usr/share/ca-certificates/163142.pem
	I0408 22:56:33.112879   21772 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20501-9125/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0408 22:56:33.112897   21772 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20501-9125/.minikube/certs/16314.pem -> /usr/share/ca-certificates/16314.pem
	I0408 22:56:33.113475   21772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20501-9125/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0408 22:56:33.137877   21772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20501-9125/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0408 22:56:33.159070   21772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20501-9125/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0408 22:56:33.185298   21772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20501-9125/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0408 22:56:33.207770   21772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/functional-546336/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0408 22:56:33.228856   21772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/functional-546336/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0408 22:56:33.251027   21772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/functional-546336/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0408 22:56:33.272315   21772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/functional-546336/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0408 22:56:33.294625   21772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20501-9125/.minikube/files/etc/ssl/certs/163142.pem --> /usr/share/ca-certificates/163142.pem (1708 bytes)
	I0408 22:56:33.316217   21772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20501-9125/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0408 22:56:33.337786   21772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20501-9125/.minikube/certs/16314.pem --> /usr/share/ca-certificates/16314.pem (1338 bytes)
	I0408 22:56:33.358722   21772 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0408 22:56:33.373131   21772 ssh_runner.go:195] Run: openssl version
	I0408 22:56:33.378702   21772 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0408 22:56:33.378755   21772 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/163142.pem && ln -fs /usr/share/ca-certificates/163142.pem /etc/ssl/certs/163142.pem"
	I0408 22:56:33.388262   21772 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/163142.pem
	I0408 22:56:33.392059   21772 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Apr  8 22:53 /usr/share/ca-certificates/163142.pem
	I0408 22:56:33.392090   21772 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  8 22:53 /usr/share/ca-certificates/163142.pem
	I0408 22:56:33.392135   21772 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/163142.pem
	I0408 22:56:33.397236   21772 command_runner.go:130] > 3ec20f2e
	I0408 22:56:33.397295   21772 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/163142.pem /etc/ssl/certs/3ec20f2e.0"
	I0408 22:56:33.405382   21772 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0408 22:56:33.414578   21772 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0408 22:56:33.418346   21772 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Apr  8 22:46 /usr/share/ca-certificates/minikubeCA.pem
	I0408 22:56:33.418448   21772 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  8 22:46 /usr/share/ca-certificates/minikubeCA.pem
	I0408 22:56:33.418490   21772 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0408 22:56:33.423400   21772 command_runner.go:130] > b5213941
	I0408 22:56:33.423452   21772 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0408 22:56:33.431557   21772 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16314.pem && ln -fs /usr/share/ca-certificates/16314.pem /etc/ssl/certs/16314.pem"
	I0408 22:56:33.442046   21772 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16314.pem
	I0408 22:56:33.446095   21772 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Apr  8 22:53 /usr/share/ca-certificates/16314.pem
	I0408 22:56:33.446156   21772 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  8 22:53 /usr/share/ca-certificates/16314.pem
	I0408 22:56:33.446198   21772 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16314.pem
	I0408 22:56:33.451257   21772 command_runner.go:130] > 51391683
	I0408 22:56:33.451490   21772 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16314.pem /etc/ssl/certs/51391683.0"
	I0408 22:56:33.460149   21772 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0408 22:56:33.463927   21772 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0408 22:56:33.463942   21772 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0408 22:56:33.463948   21772 command_runner.go:130] > Device: 253,1	Inode: 7338542     Links: 1
	I0408 22:56:33.463973   21772 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0408 22:56:33.463986   21772 command_runner.go:130] > Access: 2025-04-08 22:54:04.578144403 +0000
	I0408 22:56:33.463994   21772 command_runner.go:130] > Modify: 2025-04-08 22:54:04.578144403 +0000
	I0408 22:56:33.464003   21772 command_runner.go:130] > Change: 2025-04-08 22:54:04.578144403 +0000
	I0408 22:56:33.464008   21772 command_runner.go:130] >  Birth: 2025-04-08 22:54:04.578144403 +0000
	I0408 22:56:33.464063   21772 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0408 22:56:33.469050   21772 command_runner.go:130] > Certificate will not expire
	I0408 22:56:33.469263   21772 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0408 22:56:33.474068   21772 command_runner.go:130] > Certificate will not expire
	I0408 22:56:33.474186   21772 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0408 22:56:33.478955   21772 command_runner.go:130] > Certificate will not expire
	I0408 22:56:33.479120   21772 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0408 22:56:33.484075   21772 command_runner.go:130] > Certificate will not expire
	I0408 22:56:33.484130   21772 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0408 22:56:33.488910   21772 command_runner.go:130] > Certificate will not expire
	I0408 22:56:33.488951   21772 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0408 22:56:33.493716   21772 command_runner.go:130] > Certificate will not expire
	I0408 22:56:33.493900   21772 kubeadm.go:392] StartCluster: {Name:functional-546336 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:functional-546336 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.234 Port:8441 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 22:56:33.493993   21772 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0408 22:56:33.494051   21772 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0408 22:56:33.531075   21772 command_runner.go:130] > f5383886ea44f313131ed72cfa49949bd8b5d2f4873b4e32d81e980ce2940fd0
	I0408 22:56:33.531123   21772 command_runner.go:130] > c7dc125c272a5b82818d376a354500ce1464ae56dbc755c0a0779f8c284bfc5c
	I0408 22:56:33.531134   21772 command_runner.go:130] > 0b6b87627794e032e13a615d5ea5b991ffef3843bb9ae6e9f153041eb733d782
	I0408 22:56:33.531145   21772 command_runner.go:130] > d19ba2e208f70c1259cbd17e3566fbc44b8b4d173fc3026591fa8cea6fad11ec
	I0408 22:56:33.531154   21772 command_runner.go:130] > a02e7488bb5d2cf6fe89c9af4932fa61408d59b64f5415e68dd23aef1e5f092c
	I0408 22:56:33.531170   21772 command_runner.go:130] > e50303177ff5685821e81bebd1db71a63709a33fc7ff89cb111d923979605c70
	I0408 22:56:33.531180   21772 command_runner.go:130] > d31c5cb795e76e9631c155d0b7e96672025f714ef26b9972bd87e759a40f7e4d
	I0408 22:56:33.531194   21772 command_runner.go:130] > 090c0b802b3a3a27f9446836815daacc48a4cfa1ed0b54043325e0eada99d664
	I0408 22:56:33.531207   21772 command_runner.go:130] > f5685f897beb5418ec57fb5f80ab70b0ffe4b406ecf635a6249b528e65cabfc4
	I0408 22:56:33.531221   21772 command_runner.go:130] > 31aa14fb57438b0d736b005aed16b4fb5438a2d6ce4af59f042b06d0271bcaa5
	I0408 22:56:33.531245   21772 cri.go:89] found id: "f5383886ea44f313131ed72cfa49949bd8b5d2f4873b4e32d81e980ce2940fd0"
	I0408 22:56:33.531257   21772 cri.go:89] found id: "c7dc125c272a5b82818d376a354500ce1464ae56dbc755c0a0779f8c284bfc5c"
	I0408 22:56:33.531266   21772 cri.go:89] found id: "0b6b87627794e032e13a615d5ea5b991ffef3843bb9ae6e9f153041eb733d782"
	I0408 22:56:33.531275   21772 cri.go:89] found id: "d19ba2e208f70c1259cbd17e3566fbc44b8b4d173fc3026591fa8cea6fad11ec"
	I0408 22:56:33.531284   21772 cri.go:89] found id: "a02e7488bb5d2cf6fe89c9af4932fa61408d59b64f5415e68dd23aef1e5f092c"
	I0408 22:56:33.531294   21772 cri.go:89] found id: "e50303177ff5685821e81bebd1db71a63709a33fc7ff89cb111d923979605c70"
	I0408 22:56:33.531302   21772 cri.go:89] found id: "d31c5cb795e76e9631c155d0b7e96672025f714ef26b9972bd87e759a40f7e4d"
	I0408 22:56:33.531308   21772 cri.go:89] found id: "090c0b802b3a3a27f9446836815daacc48a4cfa1ed0b54043325e0eada99d664"
	I0408 22:56:33.531312   21772 cri.go:89] found id: "f5685f897beb5418ec57fb5f80ab70b0ffe4b406ecf635a6249b528e65cabfc4"
	I0408 22:56:33.531318   21772 cri.go:89] found id: "31aa14fb57438b0d736b005aed16b4fb5438a2d6ce4af59f042b06d0271bcaa5"
	I0408 22:56:33.531323   21772 cri.go:89] found id: ""
	I0408 22:56:33.531374   21772 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-546336 -n functional-546336
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-546336 -n functional-546336: exit status 2 (218.206172ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "functional-546336" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmd (336.20s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (336.41s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:758: (dbg) Run:  out/kubectl --context functional-546336 get pods
functional_test.go:758: (dbg) Non-zero exit: out/kubectl --context functional-546336 get pods: exit status 1 (91.463169ms)

                                                
                                                
** stderr ** 
	E0408 23:25:43.394512   29247 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.39.234:8441/api?timeout=32s\": dial tcp 192.168.39.234:8441: connect: connection refused"
	E0408 23:25:43.396082   29247 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.39.234:8441/api?timeout=32s\": dial tcp 192.168.39.234:8441: connect: connection refused"
	E0408 23:25:43.397552   29247 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.39.234:8441/api?timeout=32s\": dial tcp 192.168.39.234:8441: connect: connection refused"
	E0408 23:25:43.398995   29247 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.39.234:8441/api?timeout=32s\": dial tcp 192.168.39.234:8441: connect: connection refused"
	E0408 23:25:43.400462   29247 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.39.234:8441/api?timeout=32s\": dial tcp 192.168.39.234:8441: connect: connection refused"
	The connection to the server 192.168.39.234:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:761: failed to run kubectl directly. args "out/kubectl --context functional-546336 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-546336 -n functional-546336
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-546336 -n functional-546336: exit status 2 (214.235184ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestFunctional/serial/MinikubeKubectlCmdDirectly FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/serial/MinikubeKubectlCmdDirectly]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p functional-546336 logs -n 25
E0408 23:26:01.012027   16314 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/addons-355098/client.crt: no such file or directory" logger="UnhandledError"
E0408 23:27:57.932440   16314 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/addons-355098/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p functional-546336 logs -n 25: (5m35.833755004s)
helpers_test.go:252: TestFunctional/serial/MinikubeKubectlCmdDirectly logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| Command |                    Args                     |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| pause   | nospam-715453 --log_dir                     | nospam-715453     | jenkins | v1.35.0 | 08 Apr 25 22:53 UTC | 08 Apr 25 22:53 UTC |
	|         | /tmp/nospam-715453 pause                    |                   |         |         |                     |                     |
	| unpause | nospam-715453 --log_dir                     | nospam-715453     | jenkins | v1.35.0 | 08 Apr 25 22:53 UTC | 08 Apr 25 22:53 UTC |
	|         | /tmp/nospam-715453 unpause                  |                   |         |         |                     |                     |
	| unpause | nospam-715453 --log_dir                     | nospam-715453     | jenkins | v1.35.0 | 08 Apr 25 22:53 UTC | 08 Apr 25 22:53 UTC |
	|         | /tmp/nospam-715453 unpause                  |                   |         |         |                     |                     |
	| unpause | nospam-715453 --log_dir                     | nospam-715453     | jenkins | v1.35.0 | 08 Apr 25 22:53 UTC | 08 Apr 25 22:53 UTC |
	|         | /tmp/nospam-715453 unpause                  |                   |         |         |                     |                     |
	| stop    | nospam-715453 --log_dir                     | nospam-715453     | jenkins | v1.35.0 | 08 Apr 25 22:53 UTC | 08 Apr 25 22:53 UTC |
	|         | /tmp/nospam-715453 stop                     |                   |         |         |                     |                     |
	| stop    | nospam-715453 --log_dir                     | nospam-715453     | jenkins | v1.35.0 | 08 Apr 25 22:53 UTC | 08 Apr 25 22:53 UTC |
	|         | /tmp/nospam-715453 stop                     |                   |         |         |                     |                     |
	| stop    | nospam-715453 --log_dir                     | nospam-715453     | jenkins | v1.35.0 | 08 Apr 25 22:53 UTC | 08 Apr 25 22:53 UTC |
	|         | /tmp/nospam-715453 stop                     |                   |         |         |                     |                     |
	| delete  | -p nospam-715453                            | nospam-715453     | jenkins | v1.35.0 | 08 Apr 25 22:53 UTC | 08 Apr 25 22:53 UTC |
	| start   | -p functional-546336                        | functional-546336 | jenkins | v1.35.0 | 08 Apr 25 22:53 UTC | 08 Apr 25 22:54 UTC |
	|         | --memory=4000                               |                   |         |         |                     |                     |
	|         | --apiserver-port=8441                       |                   |         |         |                     |                     |
	|         | --wait=all --driver=kvm2                    |                   |         |         |                     |                     |
	|         | --container-runtime=crio                    |                   |         |         |                     |                     |
	| start   | -p functional-546336                        | functional-546336 | jenkins | v1.35.0 | 08 Apr 25 22:54 UTC |                     |
	|         | --alsologtostderr -v=8                      |                   |         |         |                     |                     |
	| cache   | functional-546336 cache add                 | functional-546336 | jenkins | v1.35.0 | 08 Apr 25 23:19 UTC | 08 Apr 25 23:20 UTC |
	|         | registry.k8s.io/pause:3.1                   |                   |         |         |                     |                     |
	| cache   | functional-546336 cache add                 | functional-546336 | jenkins | v1.35.0 | 08 Apr 25 23:20 UTC | 08 Apr 25 23:20 UTC |
	|         | registry.k8s.io/pause:3.3                   |                   |         |         |                     |                     |
	| cache   | functional-546336 cache add                 | functional-546336 | jenkins | v1.35.0 | 08 Apr 25 23:20 UTC | 08 Apr 25 23:20 UTC |
	|         | registry.k8s.io/pause:latest                |                   |         |         |                     |                     |
	| cache   | functional-546336 cache add                 | functional-546336 | jenkins | v1.35.0 | 08 Apr 25 23:20 UTC | 08 Apr 25 23:20 UTC |
	|         | minikube-local-cache-test:functional-546336 |                   |         |         |                     |                     |
	| cache   | functional-546336 cache delete              | functional-546336 | jenkins | v1.35.0 | 08 Apr 25 23:20 UTC | 08 Apr 25 23:20 UTC |
	|         | minikube-local-cache-test:functional-546336 |                   |         |         |                     |                     |
	| cache   | delete                                      | minikube          | jenkins | v1.35.0 | 08 Apr 25 23:20 UTC | 08 Apr 25 23:20 UTC |
	|         | registry.k8s.io/pause:3.3                   |                   |         |         |                     |                     |
	| cache   | list                                        | minikube          | jenkins | v1.35.0 | 08 Apr 25 23:20 UTC | 08 Apr 25 23:20 UTC |
	| ssh     | functional-546336 ssh sudo                  | functional-546336 | jenkins | v1.35.0 | 08 Apr 25 23:20 UTC | 08 Apr 25 23:20 UTC |
	|         | crictl images                               |                   |         |         |                     |                     |
	| ssh     | functional-546336                           | functional-546336 | jenkins | v1.35.0 | 08 Apr 25 23:20 UTC | 08 Apr 25 23:20 UTC |
	|         | ssh sudo crictl rmi                         |                   |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                |                   |         |         |                     |                     |
	| ssh     | functional-546336 ssh                       | functional-546336 | jenkins | v1.35.0 | 08 Apr 25 23:20 UTC |                     |
	|         | sudo crictl inspecti                        |                   |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                |                   |         |         |                     |                     |
	| cache   | functional-546336 cache reload              | functional-546336 | jenkins | v1.35.0 | 08 Apr 25 23:20 UTC | 08 Apr 25 23:20 UTC |
	| ssh     | functional-546336 ssh                       | functional-546336 | jenkins | v1.35.0 | 08 Apr 25 23:20 UTC | 08 Apr 25 23:20 UTC |
	|         | sudo crictl inspecti                        |                   |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                |                   |         |         |                     |                     |
	| cache   | delete                                      | minikube          | jenkins | v1.35.0 | 08 Apr 25 23:20 UTC | 08 Apr 25 23:20 UTC |
	|         | registry.k8s.io/pause:3.1                   |                   |         |         |                     |                     |
	| cache   | delete                                      | minikube          | jenkins | v1.35.0 | 08 Apr 25 23:20 UTC | 08 Apr 25 23:20 UTC |
	|         | registry.k8s.io/pause:latest                |                   |         |         |                     |                     |
	| kubectl | functional-546336 kubectl --                | functional-546336 | jenkins | v1.35.0 | 08 Apr 25 23:20 UTC |                     |
	|         | --context functional-546336                 |                   |         |         |                     |                     |
	|         | get pods                                    |                   |         |         |                     |                     |
	|---------|---------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/08 22:54:53
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0408 22:54:53.750429   21772 out.go:345] Setting OutFile to fd 1 ...
	I0408 22:54:53.750673   21772 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0408 22:54:53.750778   21772 out.go:358] Setting ErrFile to fd 2...
	I0408 22:54:53.750790   21772 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0408 22:54:53.751041   21772 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20501-9125/.minikube/bin
	I0408 22:54:53.751600   21772 out.go:352] Setting JSON to false
	I0408 22:54:53.752542   21772 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":2239,"bootTime":1744150655,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0408 22:54:53.752628   21772 start.go:139] virtualization: kvm guest
	I0408 22:54:53.754529   21772 out.go:177] * [functional-546336] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0408 22:54:53.755700   21772 out.go:177]   - MINIKUBE_LOCATION=20501
	I0408 22:54:53.755700   21772 notify.go:220] Checking for updates...
	I0408 22:54:53.757645   21772 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0408 22:54:53.758881   21772 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20501-9125/kubeconfig
	I0408 22:54:53.760110   21772 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20501-9125/.minikube
	I0408 22:54:53.761221   21772 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0408 22:54:53.762262   21772 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0408 22:54:53.764007   21772 config.go:182] Loaded profile config "functional-546336": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0408 22:54:53.764090   21772 driver.go:404] Setting default libvirt URI to qemu:///system
	I0408 22:54:53.764531   21772 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 22:54:53.764591   21772 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 22:54:53.780528   21772 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37317
	I0408 22:54:53.780962   21772 main.go:141] libmachine: () Calling .GetVersion
	I0408 22:54:53.781388   21772 main.go:141] libmachine: Using API Version  1
	I0408 22:54:53.781409   21772 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 22:54:53.781752   21772 main.go:141] libmachine: () Calling .GetMachineName
	I0408 22:54:53.781914   21772 main.go:141] libmachine: (functional-546336) Calling .DriverName
	I0408 22:54:53.819375   21772 out.go:177] * Using the kvm2 driver based on existing profile
	I0408 22:54:53.820528   21772 start.go:297] selected driver: kvm2
	I0408 22:54:53.820538   21772 start.go:901] validating driver "kvm2" against &{Name:functional-546336 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:functional-546336 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.234 Port:8441 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 22:54:53.820619   21772 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0408 22:54:53.820910   21772 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 22:54:53.820988   21772 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20501-9125/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0408 22:54:53.835403   21772 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0408 22:54:53.836289   21772 cni.go:84] Creating CNI manager for ""
	I0408 22:54:53.836343   21772 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0408 22:54:53.836403   21772 start.go:340] cluster config:
	{Name:functional-546336 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:functional-546336 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.234 Port:8441 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 22:54:53.836507   21772 iso.go:125] acquiring lock: {Name:mk618477bad490b102618c53c9c8c6b34f33ce81 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 22:54:53.838584   21772 out.go:177] * Starting "functional-546336" primary control-plane node in "functional-546336" cluster
	I0408 22:54:53.839517   21772 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0408 22:54:53.839549   21772 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20501-9125/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4
	I0408 22:54:53.839557   21772 cache.go:56] Caching tarball of preloaded images
	I0408 22:54:53.839620   21772 preload.go:172] Found /home/jenkins/minikube-integration/20501-9125/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0408 22:54:53.839629   21772 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on crio
	I0408 22:54:53.839708   21772 profile.go:143] Saving config to /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/functional-546336/config.json ...
	I0408 22:54:53.839890   21772 start.go:360] acquireMachinesLock for functional-546336: {Name:mke7be7b51cfddf557a39ecf6493fff6a1168ec9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 22:54:53.839934   21772 start.go:364] duration metric: took 24.616µs to acquireMachinesLock for "functional-546336"
	I0408 22:54:53.839951   21772 start.go:96] Skipping create...Using existing machine configuration
	I0408 22:54:53.839957   21772 fix.go:54] fixHost starting: 
	I0408 22:54:53.840198   21772 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 22:54:53.840227   21772 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 22:54:53.853842   21772 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36351
	I0408 22:54:53.854248   21772 main.go:141] libmachine: () Calling .GetVersion
	I0408 22:54:53.854642   21772 main.go:141] libmachine: Using API Version  1
	I0408 22:54:53.854660   21772 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 22:54:53.854972   21772 main.go:141] libmachine: () Calling .GetMachineName
	I0408 22:54:53.855161   21772 main.go:141] libmachine: (functional-546336) Calling .DriverName
	I0408 22:54:53.855314   21772 main.go:141] libmachine: (functional-546336) Calling .GetState
	I0408 22:54:53.856978   21772 fix.go:112] recreateIfNeeded on functional-546336: state=Running err=<nil>
	W0408 22:54:53.856995   21772 fix.go:138] unexpected machine state, will restart: <nil>
	I0408 22:54:53.858448   21772 out.go:177] * Updating the running kvm2 "functional-546336" VM ...
	I0408 22:54:53.859370   21772 machine.go:93] provisionDockerMachine start ...
	I0408 22:54:53.859389   21772 main.go:141] libmachine: (functional-546336) Calling .DriverName
	I0408 22:54:53.859573   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHHostname
	I0408 22:54:53.861808   21772 main.go:141] libmachine: (functional-546336) DBG | domain functional-546336 has defined MAC address 52:54:00:8a:a0:b8 in network mk-functional-546336
	I0408 22:54:53.862195   21772 main.go:141] libmachine: (functional-546336) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:a0:b8", ip: ""} in network mk-functional-546336: {Iface:virbr1 ExpiryTime:2025-04-08 23:53:47 +0000 UTC Type:0 Mac:52:54:00:8a:a0:b8 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:functional-546336 Clientid:01:52:54:00:8a:a0:b8}
	I0408 22:54:53.862223   21772 main.go:141] libmachine: (functional-546336) DBG | domain functional-546336 has defined IP address 192.168.39.234 and MAC address 52:54:00:8a:a0:b8 in network mk-functional-546336
	I0408 22:54:53.862331   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHPort
	I0408 22:54:53.862495   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHKeyPath
	I0408 22:54:53.862642   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHKeyPath
	I0408 22:54:53.862769   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHUsername
	I0408 22:54:53.862913   21772 main.go:141] libmachine: Using SSH client type: native
	I0408 22:54:53.863111   21772 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.234 22 <nil> <nil>}
	I0408 22:54:53.863123   21772 main.go:141] libmachine: About to run SSH command:
	hostname
	I0408 22:54:53.975743   21772 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-546336
	
	I0408 22:54:53.975774   21772 main.go:141] libmachine: (functional-546336) Calling .GetMachineName
	I0408 22:54:53.976060   21772 buildroot.go:166] provisioning hostname "functional-546336"
	I0408 22:54:53.976090   21772 main.go:141] libmachine: (functional-546336) Calling .GetMachineName
	I0408 22:54:53.976275   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHHostname
	I0408 22:54:53.978794   21772 main.go:141] libmachine: (functional-546336) DBG | domain functional-546336 has defined MAC address 52:54:00:8a:a0:b8 in network mk-functional-546336
	I0408 22:54:53.979136   21772 main.go:141] libmachine: (functional-546336) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:a0:b8", ip: ""} in network mk-functional-546336: {Iface:virbr1 ExpiryTime:2025-04-08 23:53:47 +0000 UTC Type:0 Mac:52:54:00:8a:a0:b8 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:functional-546336 Clientid:01:52:54:00:8a:a0:b8}
	I0408 22:54:53.979155   21772 main.go:141] libmachine: (functional-546336) DBG | domain functional-546336 has defined IP address 192.168.39.234 and MAC address 52:54:00:8a:a0:b8 in network mk-functional-546336
	I0408 22:54:53.979343   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHPort
	I0408 22:54:53.979538   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHKeyPath
	I0408 22:54:53.979686   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHKeyPath
	I0408 22:54:53.979818   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHUsername
	I0408 22:54:53.979975   21772 main.go:141] libmachine: Using SSH client type: native
	I0408 22:54:53.980186   21772 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.234 22 <nil> <nil>}
	I0408 22:54:53.980207   21772 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-546336 && echo "functional-546336" | sudo tee /etc/hostname
	I0408 22:54:54.107226   21772 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-546336
	
	I0408 22:54:54.107256   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHHostname
	I0408 22:54:54.110121   21772 main.go:141] libmachine: (functional-546336) DBG | domain functional-546336 has defined MAC address 52:54:00:8a:a0:b8 in network mk-functional-546336
	I0408 22:54:54.110402   21772 main.go:141] libmachine: (functional-546336) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:a0:b8", ip: ""} in network mk-functional-546336: {Iface:virbr1 ExpiryTime:2025-04-08 23:53:47 +0000 UTC Type:0 Mac:52:54:00:8a:a0:b8 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:functional-546336 Clientid:01:52:54:00:8a:a0:b8}
	I0408 22:54:54.110442   21772 main.go:141] libmachine: (functional-546336) DBG | domain functional-546336 has defined IP address 192.168.39.234 and MAC address 52:54:00:8a:a0:b8 in network mk-functional-546336
	I0408 22:54:54.110575   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHPort
	I0408 22:54:54.110737   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHKeyPath
	I0408 22:54:54.110870   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHKeyPath
	I0408 22:54:54.110984   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHUsername
	I0408 22:54:54.111111   21772 main.go:141] libmachine: Using SSH client type: native
	I0408 22:54:54.111332   21772 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.234 22 <nil> <nil>}
	I0408 22:54:54.111355   21772 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-546336' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-546336/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-546336' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0408 22:54:54.224292   21772 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0408 22:54:54.224321   21772 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20501-9125/.minikube CaCertPath:/home/jenkins/minikube-integration/20501-9125/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20501-9125/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20501-9125/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20501-9125/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20501-9125/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20501-9125/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20501-9125/.minikube}
	I0408 22:54:54.224341   21772 buildroot.go:174] setting up certificates
	I0408 22:54:54.224352   21772 provision.go:84] configureAuth start
	I0408 22:54:54.224363   21772 main.go:141] libmachine: (functional-546336) Calling .GetMachineName
	I0408 22:54:54.224632   21772 main.go:141] libmachine: (functional-546336) Calling .GetIP
	I0408 22:54:54.227055   21772 main.go:141] libmachine: (functional-546336) DBG | domain functional-546336 has defined MAC address 52:54:00:8a:a0:b8 in network mk-functional-546336
	I0408 22:54:54.227343   21772 main.go:141] libmachine: (functional-546336) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:a0:b8", ip: ""} in network mk-functional-546336: {Iface:virbr1 ExpiryTime:2025-04-08 23:53:47 +0000 UTC Type:0 Mac:52:54:00:8a:a0:b8 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:functional-546336 Clientid:01:52:54:00:8a:a0:b8}
	I0408 22:54:54.227372   21772 main.go:141] libmachine: (functional-546336) DBG | domain functional-546336 has defined IP address 192.168.39.234 and MAC address 52:54:00:8a:a0:b8 in network mk-functional-546336
	I0408 22:54:54.227496   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHHostname
	I0408 22:54:54.229707   21772 main.go:141] libmachine: (functional-546336) DBG | domain functional-546336 has defined MAC address 52:54:00:8a:a0:b8 in network mk-functional-546336
	I0408 22:54:54.230025   21772 main.go:141] libmachine: (functional-546336) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:a0:b8", ip: ""} in network mk-functional-546336: {Iface:virbr1 ExpiryTime:2025-04-08 23:53:47 +0000 UTC Type:0 Mac:52:54:00:8a:a0:b8 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:functional-546336 Clientid:01:52:54:00:8a:a0:b8}
	I0408 22:54:54.230063   21772 main.go:141] libmachine: (functional-546336) DBG | domain functional-546336 has defined IP address 192.168.39.234 and MAC address 52:54:00:8a:a0:b8 in network mk-functional-546336
	I0408 22:54:54.230204   21772 provision.go:143] copyHostCerts
	I0408 22:54:54.230228   21772 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20501-9125/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20501-9125/.minikube/ca.pem
	I0408 22:54:54.230253   21772 exec_runner.go:144] found /home/jenkins/minikube-integration/20501-9125/.minikube/ca.pem, removing ...
	I0408 22:54:54.230267   21772 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20501-9125/.minikube/ca.pem
	I0408 22:54:54.230331   21772 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20501-9125/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20501-9125/.minikube/ca.pem (1082 bytes)
	I0408 22:54:54.230397   21772 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20501-9125/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20501-9125/.minikube/cert.pem
	I0408 22:54:54.230414   21772 exec_runner.go:144] found /home/jenkins/minikube-integration/20501-9125/.minikube/cert.pem, removing ...
	I0408 22:54:54.230421   21772 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20501-9125/.minikube/cert.pem
	I0408 22:54:54.230442   21772 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20501-9125/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20501-9125/.minikube/cert.pem (1123 bytes)
	I0408 22:54:54.230555   21772 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20501-9125/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20501-9125/.minikube/key.pem
	I0408 22:54:54.230580   21772 exec_runner.go:144] found /home/jenkins/minikube-integration/20501-9125/.minikube/key.pem, removing ...
	I0408 22:54:54.230584   21772 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20501-9125/.minikube/key.pem
	I0408 22:54:54.230614   21772 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20501-9125/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20501-9125/.minikube/key.pem (1675 bytes)
	I0408 22:54:54.230663   21772 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20501-9125/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20501-9125/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20501-9125/.minikube/certs/ca-key.pem org=jenkins.functional-546336 san=[127.0.0.1 192.168.39.234 functional-546336 localhost minikube]
	I0408 22:54:54.377433   21772 provision.go:177] copyRemoteCerts
	I0408 22:54:54.377494   21772 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0408 22:54:54.377516   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHHostname
	I0408 22:54:54.379910   21772 main.go:141] libmachine: (functional-546336) DBG | domain functional-546336 has defined MAC address 52:54:00:8a:a0:b8 in network mk-functional-546336
	I0408 22:54:54.380186   21772 main.go:141] libmachine: (functional-546336) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:a0:b8", ip: ""} in network mk-functional-546336: {Iface:virbr1 ExpiryTime:2025-04-08 23:53:47 +0000 UTC Type:0 Mac:52:54:00:8a:a0:b8 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:functional-546336 Clientid:01:52:54:00:8a:a0:b8}
	I0408 22:54:54.380208   21772 main.go:141] libmachine: (functional-546336) DBG | domain functional-546336 has defined IP address 192.168.39.234 and MAC address 52:54:00:8a:a0:b8 in network mk-functional-546336
	I0408 22:54:54.380353   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHPort
	I0408 22:54:54.380512   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHKeyPath
	I0408 22:54:54.380651   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHUsername
	I0408 22:54:54.380759   21772 sshutil.go:53] new ssh client: &{IP:192.168.39.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20501-9125/.minikube/machines/functional-546336/id_rsa Username:docker}
	I0408 22:54:54.469346   21772 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20501-9125/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0408 22:54:54.469406   21772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20501-9125/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0408 22:54:54.492119   21772 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20501-9125/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0408 22:54:54.492170   21772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20501-9125/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0408 22:54:54.515795   21772 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20501-9125/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0408 22:54:54.515854   21772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20501-9125/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0408 22:54:54.538157   21772 provision.go:87] duration metric: took 313.794377ms to configureAuth
	I0408 22:54:54.538179   21772 buildroot.go:189] setting minikube options for container-runtime
	I0408 22:54:54.538348   21772 config.go:182] Loaded profile config "functional-546336": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0408 22:54:54.538415   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHHostname
	I0408 22:54:54.540893   21772 main.go:141] libmachine: (functional-546336) DBG | domain functional-546336 has defined MAC address 52:54:00:8a:a0:b8 in network mk-functional-546336
	I0408 22:54:54.541189   21772 main.go:141] libmachine: (functional-546336) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:a0:b8", ip: ""} in network mk-functional-546336: {Iface:virbr1 ExpiryTime:2025-04-08 23:53:47 +0000 UTC Type:0 Mac:52:54:00:8a:a0:b8 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:functional-546336 Clientid:01:52:54:00:8a:a0:b8}
	I0408 22:54:54.541211   21772 main.go:141] libmachine: (functional-546336) DBG | domain functional-546336 has defined IP address 192.168.39.234 and MAC address 52:54:00:8a:a0:b8 in network mk-functional-546336
	I0408 22:54:54.541388   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHPort
	I0408 22:54:54.541569   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHKeyPath
	I0408 22:54:54.541794   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHKeyPath
	I0408 22:54:54.541956   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHUsername
	I0408 22:54:54.542154   21772 main.go:141] libmachine: Using SSH client type: native
	I0408 22:54:54.542410   21772 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.234 22 <nil> <nil>}
	I0408 22:54:54.542429   21772 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0408 22:55:00.049143   21772 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0408 22:55:00.049177   21772 machine.go:96] duration metric: took 6.189793928s to provisionDockerMachine
	I0408 22:55:00.049193   21772 start.go:293] postStartSetup for "functional-546336" (driver="kvm2")
	I0408 22:55:00.049216   21772 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0408 22:55:00.049238   21772 main.go:141] libmachine: (functional-546336) Calling .DriverName
	I0408 22:55:00.049527   21772 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0408 22:55:00.049554   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHHostname
	I0408 22:55:00.052053   21772 main.go:141] libmachine: (functional-546336) DBG | domain functional-546336 has defined MAC address 52:54:00:8a:a0:b8 in network mk-functional-546336
	I0408 22:55:00.052329   21772 main.go:141] libmachine: (functional-546336) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:a0:b8", ip: ""} in network mk-functional-546336: {Iface:virbr1 ExpiryTime:2025-04-08 23:53:47 +0000 UTC Type:0 Mac:52:54:00:8a:a0:b8 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:functional-546336 Clientid:01:52:54:00:8a:a0:b8}
	I0408 22:55:00.052357   21772 main.go:141] libmachine: (functional-546336) DBG | domain functional-546336 has defined IP address 192.168.39.234 and MAC address 52:54:00:8a:a0:b8 in network mk-functional-546336
	I0408 22:55:00.052449   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHPort
	I0408 22:55:00.052621   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHKeyPath
	I0408 22:55:00.052774   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHUsername
	I0408 22:55:00.052915   21772 sshutil.go:53] new ssh client: &{IP:192.168.39.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20501-9125/.minikube/machines/functional-546336/id_rsa Username:docker}
	I0408 22:55:00.137252   21772 ssh_runner.go:195] Run: cat /etc/os-release
	I0408 22:55:00.140999   21772 command_runner.go:130] > NAME=Buildroot
	I0408 22:55:00.141018   21772 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0408 22:55:00.141022   21772 command_runner.go:130] > ID=buildroot
	I0408 22:55:00.141034   21772 command_runner.go:130] > VERSION_ID=2023.02.9
	I0408 22:55:00.141041   21772 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0408 22:55:00.141078   21772 info.go:137] Remote host: Buildroot 2023.02.9
	I0408 22:55:00.141091   21772 filesync.go:126] Scanning /home/jenkins/minikube-integration/20501-9125/.minikube/addons for local assets ...
	I0408 22:55:00.141153   21772 filesync.go:126] Scanning /home/jenkins/minikube-integration/20501-9125/.minikube/files for local assets ...
	I0408 22:55:00.141241   21772 filesync.go:149] local asset: /home/jenkins/minikube-integration/20501-9125/.minikube/files/etc/ssl/certs/163142.pem -> 163142.pem in /etc/ssl/certs
	I0408 22:55:00.141253   21772 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20501-9125/.minikube/files/etc/ssl/certs/163142.pem -> /etc/ssl/certs/163142.pem
	I0408 22:55:00.141327   21772 filesync.go:149] local asset: /home/jenkins/minikube-integration/20501-9125/.minikube/files/etc/test/nested/copy/16314/hosts -> hosts in /etc/test/nested/copy/16314
	I0408 22:55:00.141336   21772 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20501-9125/.minikube/files/etc/test/nested/copy/16314/hosts -> /etc/test/nested/copy/16314/hosts
	I0408 22:55:00.141386   21772 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/16314
	I0408 22:55:00.149913   21772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20501-9125/.minikube/files/etc/ssl/certs/163142.pem --> /etc/ssl/certs/163142.pem (1708 bytes)
	I0408 22:55:00.172587   21772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20501-9125/.minikube/files/etc/test/nested/copy/16314/hosts --> /etc/test/nested/copy/16314/hosts (40 bytes)
	I0408 22:55:00.194320   21772 start.go:296] duration metric: took 145.104306ms for postStartSetup
	I0408 22:55:00.194353   21772 fix.go:56] duration metric: took 6.354395244s for fixHost
	I0408 22:55:00.194371   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHHostname
	I0408 22:55:00.197105   21772 main.go:141] libmachine: (functional-546336) DBG | domain functional-546336 has defined MAC address 52:54:00:8a:a0:b8 in network mk-functional-546336
	I0408 22:55:00.197468   21772 main.go:141] libmachine: (functional-546336) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:a0:b8", ip: ""} in network mk-functional-546336: {Iface:virbr1 ExpiryTime:2025-04-08 23:53:47 +0000 UTC Type:0 Mac:52:54:00:8a:a0:b8 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:functional-546336 Clientid:01:52:54:00:8a:a0:b8}
	I0408 22:55:00.197508   21772 main.go:141] libmachine: (functional-546336) DBG | domain functional-546336 has defined IP address 192.168.39.234 and MAC address 52:54:00:8a:a0:b8 in network mk-functional-546336
	I0408 22:55:00.197619   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHPort
	I0408 22:55:00.197806   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHKeyPath
	I0408 22:55:00.197977   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHKeyPath
	I0408 22:55:00.198135   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHUsername
	I0408 22:55:00.198315   21772 main.go:141] libmachine: Using SSH client type: native
	I0408 22:55:00.198518   21772 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.234 22 <nil> <nil>}
	I0408 22:55:00.198529   21772 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0408 22:55:00.312401   21772 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744152900.293880637
	
	I0408 22:55:00.312424   21772 fix.go:216] guest clock: 1744152900.293880637
	I0408 22:55:00.312432   21772 fix.go:229] Guest: 2025-04-08 22:55:00.293880637 +0000 UTC Remote: 2025-04-08 22:55:00.194356923 +0000 UTC m=+6.478226412 (delta=99.523714ms)
	I0408 22:55:00.312463   21772 fix.go:200] guest clock delta is within tolerance: 99.523714ms
	I0408 22:55:00.312469   21772 start.go:83] releasing machines lock for "functional-546336", held for 6.472524067s
	I0408 22:55:00.312490   21772 main.go:141] libmachine: (functional-546336) Calling .DriverName
	I0408 22:55:00.312723   21772 main.go:141] libmachine: (functional-546336) Calling .GetIP
	I0408 22:55:00.315235   21772 main.go:141] libmachine: (functional-546336) DBG | domain functional-546336 has defined MAC address 52:54:00:8a:a0:b8 in network mk-functional-546336
	I0408 22:55:00.315592   21772 main.go:141] libmachine: (functional-546336) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:a0:b8", ip: ""} in network mk-functional-546336: {Iface:virbr1 ExpiryTime:2025-04-08 23:53:47 +0000 UTC Type:0 Mac:52:54:00:8a:a0:b8 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:functional-546336 Clientid:01:52:54:00:8a:a0:b8}
	I0408 22:55:00.315620   21772 main.go:141] libmachine: (functional-546336) DBG | domain functional-546336 has defined IP address 192.168.39.234 and MAC address 52:54:00:8a:a0:b8 in network mk-functional-546336
	I0408 22:55:00.315756   21772 main.go:141] libmachine: (functional-546336) Calling .DriverName
	I0408 22:55:00.316286   21772 main.go:141] libmachine: (functional-546336) Calling .DriverName
	I0408 22:55:00.316432   21772 main.go:141] libmachine: (functional-546336) Calling .DriverName
	I0408 22:55:00.316535   21772 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0408 22:55:00.316574   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHHostname
	I0408 22:55:00.316683   21772 ssh_runner.go:195] Run: cat /version.json
	I0408 22:55:00.316708   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHHostname
	I0408 22:55:00.319048   21772 main.go:141] libmachine: (functional-546336) DBG | domain functional-546336 has defined MAC address 52:54:00:8a:a0:b8 in network mk-functional-546336
	I0408 22:55:00.319325   21772 main.go:141] libmachine: (functional-546336) DBG | domain functional-546336 has defined MAC address 52:54:00:8a:a0:b8 in network mk-functional-546336
	I0408 22:55:00.319354   21772 main.go:141] libmachine: (functional-546336) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:a0:b8", ip: ""} in network mk-functional-546336: {Iface:virbr1 ExpiryTime:2025-04-08 23:53:47 +0000 UTC Type:0 Mac:52:54:00:8a:a0:b8 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:functional-546336 Clientid:01:52:54:00:8a:a0:b8}
	I0408 22:55:00.319371   21772 main.go:141] libmachine: (functional-546336) DBG | domain functional-546336 has defined IP address 192.168.39.234 and MAC address 52:54:00:8a:a0:b8 in network mk-functional-546336
	I0408 22:55:00.319522   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHPort
	I0408 22:55:00.319696   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHKeyPath
	I0408 22:55:00.319776   21772 main.go:141] libmachine: (functional-546336) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:a0:b8", ip: ""} in network mk-functional-546336: {Iface:virbr1 ExpiryTime:2025-04-08 23:53:47 +0000 UTC Type:0 Mac:52:54:00:8a:a0:b8 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:functional-546336 Clientid:01:52:54:00:8a:a0:b8}
	I0408 22:55:00.319817   21772 main.go:141] libmachine: (functional-546336) DBG | domain functional-546336 has defined IP address 192.168.39.234 and MAC address 52:54:00:8a:a0:b8 in network mk-functional-546336
	I0408 22:55:00.319891   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHUsername
	I0408 22:55:00.319984   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHPort
	I0408 22:55:00.320037   21772 sshutil.go:53] new ssh client: &{IP:192.168.39.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20501-9125/.minikube/machines/functional-546336/id_rsa Username:docker}
	I0408 22:55:00.320121   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHKeyPath
	I0408 22:55:00.320259   21772 main.go:141] libmachine: (functional-546336) Calling .GetSSHUsername
	I0408 22:55:00.320368   21772 sshutil.go:53] new ssh client: &{IP:192.168.39.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20501-9125/.minikube/machines/functional-546336/id_rsa Username:docker}
	I0408 22:55:00.470604   21772 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0408 22:55:00.470683   21772 command_runner.go:130] > {"iso_version": "v1.35.0", "kicbase_version": "v0.0.45-1736763277-20236", "minikube_version": "v1.35.0", "commit": "3fb24bd87c8c8761e2515e1a9ee13835a389ed68"}
	I0408 22:55:00.470819   21772 ssh_runner.go:195] Run: systemctl --version
	I0408 22:55:00.499552   21772 command_runner.go:130] > systemd 252 (252)
	I0408 22:55:00.499604   21772 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0408 22:55:00.500041   21772 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0408 22:55:00.827340   21772 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0408 22:55:00.834963   21772 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0408 22:55:00.835008   21772 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0408 22:55:00.835072   21772 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0408 22:55:00.877281   21772 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
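The find/mv invocation above moves any bridge or podman CNI configs out of the way by appending a .mk_disabled suffix; here nothing matched. A rough Go equivalent of that step (the function name is illustrative; the directory and suffix are taken from the log):

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// disableBridgeCNIConfigs renames bridge/podman CNI config files so the
// runtime ignores them, mirroring the find/mv command in the log above.
func disableBridgeCNIConfigs(dir string) ([]string, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	var disabled []string
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join(dir, name)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				return disabled, err
			}
			disabled = append(disabled, src)
		}
	}
	return disabled, nil
}

func main() {
	disabled, err := disableBridgeCNIConfigs("/etc/cni/net.d")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("disabled:", disabled)
}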
	I0408 22:55:00.877304   21772 start.go:495] detecting cgroup driver to use...
	I0408 22:55:00.877378   21772 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0408 22:55:00.940318   21772 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0408 22:55:01.008191   21772 docker.go:217] disabling cri-docker service (if available) ...
	I0408 22:55:01.008253   21772 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0408 22:55:01.030120   21772 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0408 22:55:01.062576   21772 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0408 22:55:01.269983   21772 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0408 22:55:01.496425   21772 docker.go:233] disabling docker service ...
	I0408 22:55:01.496502   21772 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0408 22:55:01.519064   21772 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0408 22:55:01.540326   21772 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0408 22:55:01.741595   21772 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0408 22:55:01.913173   21772 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
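The systemctl sequence above stops, disables, and masks the cri-docker and docker units before the node is switched over to CRI-O. A hedged sketch of that pattern in Go (unit order and error handling are simplified relative to the log, and running it requires sudo):

package main

import (
	"fmt"
	"os/exec"
)

// disableService stops, disables, and masks a systemd unit, roughly mirroring
// the sequence the log applies to cri-docker and docker.
func disableService(unit string) error {
	for _, args := range [][]string{
		{"systemctl", "stop", "-f", unit},
		{"systemctl", "disable", unit},
		{"systemctl", "mask", unit},
	} {
		if out, err := exec.Command("sudo", args...).CombinedOutput(); err != nil {
			return fmt.Errorf("%v: %v: %s", args, err, out)
		}
	}
	return nil
}

func main() {
	for _, unit := range []string{"cri-docker.socket", "cri-docker.service", "docker.socket", "docker.service"} {
		if err := disableService(unit); err != nil {
			fmt.Println("warn:", err)
		}
	}
}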
	I0408 22:55:01.927297   21772 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0408 22:55:01.950625   21772 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
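The printf | tee step above writes /etc/crictl.yaml so that crictl talks to the CRI-O socket. An equivalent sketch in Go, using the path and endpoint shown in the log (the helper name is an illustrative assumption):

package main

import (
	"fmt"
	"os"
)

// writeCrictlConfig points crictl at the CRI-O socket, equivalent to the
// printf | tee invocation recorded in the log.
func writeCrictlConfig(path, endpoint string) error {
	content := fmt.Sprintf("runtime-endpoint: %s\n", endpoint)
	return os.WriteFile(path, []byte(content), 0o644)
}

func main() {
	if err := writeCrictlConfig("/etc/crictl.yaml", "unix:///var/run/crio/crio.sock"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}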
	I0408 22:55:01.951000   21772 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0408 22:55:01.951058   21772 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 22:55:01.962726   21772 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0408 22:55:01.962790   21772 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 22:55:01.974651   21772 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 22:55:01.985351   21772 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 22:55:01.996381   21772 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0408 22:55:02.012061   21772 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 22:55:02.024694   21772 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 22:55:02.036195   21772 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
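The sed commands above edit the CRI-O drop-in /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, set cgroup_manager to cgroupfs, reset conmon_cgroup, and allow unprivileged low ports via default_sysctls. A simplified Go sketch of the first two substitutions (the regexes are illustrative approximations of the sed expressions, not minikube's code):

package main

import (
	"fmt"
	"regexp"
)

// rewriteCrioDropIn mirrors the sed edits from the log: pin the pause image
// and force the desired cgroup manager in a CRI-O drop-in.
func rewriteCrioDropIn(conf, pauseImage, cgroupManager string) string {
	pauseRe := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	cgroupRe := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
	conf = pauseRe.ReplaceAllString(conf, fmt.Sprintf("pause_image = %q", pauseImage))
	conf = cgroupRe.ReplaceAllString(conf, fmt.Sprintf("cgroup_manager = %q", cgroupManager))
	return conf
}

func main() {
	in := "pause_image = \"old\"\ncgroup_manager = \"systemd\"\n"
	fmt.Print(rewriteCrioDropIn(in, "registry.k8s.io/pause:3.10", "cgroupfs"))
}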
	I0408 22:55:02.045483   21772 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0408 22:55:02.053886   21772 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0408 22:55:02.053960   21772 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0408 22:55:02.066815   21772 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 22:55:02.213651   21772 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0408 22:56:32.679193   21772 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.465496752s)
	I0408 22:56:32.679231   21772 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0408 22:56:32.679281   21772 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0408 22:56:32.684914   21772 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0408 22:56:32.684956   21772 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0408 22:56:32.684981   21772 command_runner.go:130] > Device: 0,22	Inode: 1501        Links: 1
	I0408 22:56:32.684990   21772 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0408 22:56:32.684996   21772 command_runner.go:130] > Access: 2025-04-08 22:56:32.503580497 +0000
	I0408 22:56:32.685001   21772 command_runner.go:130] > Modify: 2025-04-08 22:56:32.503580497 +0000
	I0408 22:56:32.685010   21772 command_runner.go:130] > Change: 2025-04-08 22:56:32.503580497 +0000
	I0408 22:56:32.685013   21772 command_runner.go:130] >  Birth: -
	I0408 22:56:32.685205   21772 start.go:563] Will wait 60s for crictl version
	I0408 22:56:32.685262   21772 ssh_runner.go:195] Run: which crictl
	I0408 22:56:32.688828   21772 command_runner.go:130] > /usr/bin/crictl
	I0408 22:56:32.688893   21772 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0408 22:56:32.724970   21772 command_runner.go:130] > Version:  0.1.0
	I0408 22:56:32.724989   21772 command_runner.go:130] > RuntimeName:  cri-o
	I0408 22:56:32.724994   21772 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0408 22:56:32.724998   21772 command_runner.go:130] > RuntimeApiVersion:  v1
	I0408 22:56:32.725893   21772 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0408 22:56:32.725977   21772 ssh_runner.go:195] Run: crio --version
	I0408 22:56:32.752723   21772 command_runner.go:130] > crio version 1.29.1
	I0408 22:56:32.752740   21772 command_runner.go:130] > Version:        1.29.1
	I0408 22:56:32.752746   21772 command_runner.go:130] > GitCommit:      unknown
	I0408 22:56:32.752750   21772 command_runner.go:130] > GitCommitDate:  unknown
	I0408 22:56:32.752754   21772 command_runner.go:130] > GitTreeState:   clean
	I0408 22:56:32.752759   21772 command_runner.go:130] > BuildDate:      2025-01-14T08:57:58Z
	I0408 22:56:32.752763   21772 command_runner.go:130] > GoVersion:      go1.21.6
	I0408 22:56:32.752767   21772 command_runner.go:130] > Compiler:       gc
	I0408 22:56:32.752771   21772 command_runner.go:130] > Platform:       linux/amd64
	I0408 22:56:32.752775   21772 command_runner.go:130] > Linkmode:       dynamic
	I0408 22:56:32.752779   21772 command_runner.go:130] > BuildTags:      
	I0408 22:56:32.752783   21772 command_runner.go:130] >   containers_image_ostree_stub
	I0408 22:56:32.752787   21772 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0408 22:56:32.752791   21772 command_runner.go:130] >   btrfs_noversion
	I0408 22:56:32.752795   21772 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0408 22:56:32.752800   21772 command_runner.go:130] >   libdm_no_deferred_remove
	I0408 22:56:32.752804   21772 command_runner.go:130] >   seccomp
	I0408 22:56:32.752810   21772 command_runner.go:130] > LDFlags:          unknown
	I0408 22:56:32.752814   21772 command_runner.go:130] > SeccompEnabled:   true
	I0408 22:56:32.752818   21772 command_runner.go:130] > AppArmorEnabled:  false
	I0408 22:56:32.753859   21772 ssh_runner.go:195] Run: crio --version
	I0408 22:56:32.778913   21772 command_runner.go:130] > crio version 1.29.1
	I0408 22:56:32.778948   21772 command_runner.go:130] > Version:        1.29.1
	I0408 22:56:32.778957   21772 command_runner.go:130] > GitCommit:      unknown
	I0408 22:56:32.778962   21772 command_runner.go:130] > GitCommitDate:  unknown
	I0408 22:56:32.778967   21772 command_runner.go:130] > GitTreeState:   clean
	I0408 22:56:32.778975   21772 command_runner.go:130] > BuildDate:      2025-01-14T08:57:58Z
	I0408 22:56:32.778980   21772 command_runner.go:130] > GoVersion:      go1.21.6
	I0408 22:56:32.778986   21772 command_runner.go:130] > Compiler:       gc
	I0408 22:56:32.778993   21772 command_runner.go:130] > Platform:       linux/amd64
	I0408 22:56:32.779002   21772 command_runner.go:130] > Linkmode:       dynamic
	I0408 22:56:32.779012   21772 command_runner.go:130] > BuildTags:      
	I0408 22:56:32.779020   21772 command_runner.go:130] >   containers_image_ostree_stub
	I0408 22:56:32.779030   21772 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0408 22:56:32.779037   21772 command_runner.go:130] >   btrfs_noversion
	I0408 22:56:32.779048   21772 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0408 22:56:32.779056   21772 command_runner.go:130] >   libdm_no_deferred_remove
	I0408 22:56:32.779064   21772 command_runner.go:130] >   seccomp
	I0408 22:56:32.779072   21772 command_runner.go:130] > LDFlags:          unknown
	I0408 22:56:32.779080   21772 command_runner.go:130] > SeccompEnabled:   true
	I0408 22:56:32.779090   21772 command_runner.go:130] > AppArmorEnabled:  false
	I0408 22:56:32.780946   21772 out.go:177] * Preparing Kubernetes v1.32.2 on CRI-O 1.29.1 ...
	I0408 22:56:32.782109   21772 main.go:141] libmachine: (functional-546336) Calling .GetIP
	I0408 22:56:32.785040   21772 main.go:141] libmachine: (functional-546336) DBG | domain functional-546336 has defined MAC address 52:54:00:8a:a0:b8 in network mk-functional-546336
	I0408 22:56:32.785454   21772 main.go:141] libmachine: (functional-546336) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:a0:b8", ip: ""} in network mk-functional-546336: {Iface:virbr1 ExpiryTime:2025-04-08 23:53:47 +0000 UTC Type:0 Mac:52:54:00:8a:a0:b8 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:functional-546336 Clientid:01:52:54:00:8a:a0:b8}
	I0408 22:56:32.785486   21772 main.go:141] libmachine: (functional-546336) DBG | domain functional-546336 has defined IP address 192.168.39.234 and MAC address 52:54:00:8a:a0:b8 in network mk-functional-546336
	I0408 22:56:32.785755   21772 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0408 22:56:32.789792   21772 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0408 22:56:32.790053   21772 kubeadm.go:883] updating cluster {Name:functional-546336 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:functional-546336 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.234 Port:8441 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0408 22:56:32.790145   21772 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0408 22:56:32.790182   21772 ssh_runner.go:195] Run: sudo crictl images --output json
	I0408 22:56:32.827503   21772 command_runner.go:130] > {
	I0408 22:56:32.827524   21772 command_runner.go:130] >   "images": [
	I0408 22:56:32.827528   21772 command_runner.go:130] >     {
	I0408 22:56:32.827537   21772 command_runner.go:130] >       "id": "d300845f67aebd4f27f549889087215f476cecdd6d9a715b49a4152857549c56",
	I0408 22:56:32.827541   21772 command_runner.go:130] >       "repoTags": [
	I0408 22:56:32.827547   21772 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20241212-9f82dd49"
	I0408 22:56:32.827550   21772 command_runner.go:130] >       ],
	I0408 22:56:32.827554   21772 command_runner.go:130] >       "repoDigests": [
	I0408 22:56:32.827561   21772 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:56ea59f77258052c4506076525318ffa66817500f68e94a50fdf7d600a280d26",
	I0408 22:56:32.827568   21772 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a3c74735c5fc7cab683a2f94dddec913052aacaa8d8b773c88d428e8dee3dd40"
	I0408 22:56:32.827572   21772 command_runner.go:130] >       ],
	I0408 22:56:32.827576   21772 command_runner.go:130] >       "size": "95714353",
	I0408 22:56:32.827579   21772 command_runner.go:130] >       "uid": null,
	I0408 22:56:32.827583   21772 command_runner.go:130] >       "username": "",
	I0408 22:56:32.827593   21772 command_runner.go:130] >       "spec": null,
	I0408 22:56:32.827600   21772 command_runner.go:130] >       "pinned": false
	I0408 22:56:32.827603   21772 command_runner.go:130] >     },
	I0408 22:56:32.827606   21772 command_runner.go:130] >     {
	I0408 22:56:32.827611   21772 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0408 22:56:32.827614   21772 command_runner.go:130] >       "repoTags": [
	I0408 22:56:32.827620   21772 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0408 22:56:32.827624   21772 command_runner.go:130] >       ],
	I0408 22:56:32.827627   21772 command_runner.go:130] >       "repoDigests": [
	I0408 22:56:32.827635   21772 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0408 22:56:32.827645   21772 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0408 22:56:32.827649   21772 command_runner.go:130] >       ],
	I0408 22:56:32.827657   21772 command_runner.go:130] >       "size": "31470524",
	I0408 22:56:32.827663   21772 command_runner.go:130] >       "uid": null,
	I0408 22:56:32.827667   21772 command_runner.go:130] >       "username": "",
	I0408 22:56:32.827670   21772 command_runner.go:130] >       "spec": null,
	I0408 22:56:32.827674   21772 command_runner.go:130] >       "pinned": false
	I0408 22:56:32.827677   21772 command_runner.go:130] >     },
	I0408 22:56:32.827681   21772 command_runner.go:130] >     {
	I0408 22:56:32.827689   21772 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I0408 22:56:32.827692   21772 command_runner.go:130] >       "repoTags": [
	I0408 22:56:32.827697   21772 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I0408 22:56:32.827703   21772 command_runner.go:130] >       ],
	I0408 22:56:32.827706   21772 command_runner.go:130] >       "repoDigests": [
	I0408 22:56:32.827713   21772 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I0408 22:56:32.827720   21772 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I0408 22:56:32.827724   21772 command_runner.go:130] >       ],
	I0408 22:56:32.827727   21772 command_runner.go:130] >       "size": "63273227",
	I0408 22:56:32.827731   21772 command_runner.go:130] >       "uid": null,
	I0408 22:56:32.827737   21772 command_runner.go:130] >       "username": "nonroot",
	I0408 22:56:32.827740   21772 command_runner.go:130] >       "spec": null,
	I0408 22:56:32.827754   21772 command_runner.go:130] >       "pinned": false
	I0408 22:56:32.827759   21772 command_runner.go:130] >     },
	I0408 22:56:32.827766   21772 command_runner.go:130] >     {
	I0408 22:56:32.827773   21772 command_runner.go:130] >       "id": "a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc",
	I0408 22:56:32.827777   21772 command_runner.go:130] >       "repoTags": [
	I0408 22:56:32.827782   21772 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.16-0"
	I0408 22:56:32.827785   21772 command_runner.go:130] >       ],
	I0408 22:56:32.827791   21772 command_runner.go:130] >       "repoDigests": [
	I0408 22:56:32.827798   21772 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:1d988b04a9476119cdbc2025ba58f6eec19554caf36edb43c357ff412d07e990",
	I0408 22:56:32.827811   21772 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5"
	I0408 22:56:32.827816   21772 command_runner.go:130] >       ],
	I0408 22:56:32.827820   21772 command_runner.go:130] >       "size": "151021823",
	I0408 22:56:32.827824   21772 command_runner.go:130] >       "uid": {
	I0408 22:56:32.827830   21772 command_runner.go:130] >         "value": "0"
	I0408 22:56:32.827833   21772 command_runner.go:130] >       },
	I0408 22:56:32.827837   21772 command_runner.go:130] >       "username": "",
	I0408 22:56:32.827840   21772 command_runner.go:130] >       "spec": null,
	I0408 22:56:32.827844   21772 command_runner.go:130] >       "pinned": false
	I0408 22:56:32.827846   21772 command_runner.go:130] >     },
	I0408 22:56:32.827850   21772 command_runner.go:130] >     {
	I0408 22:56:32.827858   21772 command_runner.go:130] >       "id": "85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef",
	I0408 22:56:32.827874   21772 command_runner.go:130] >       "repoTags": [
	I0408 22:56:32.827882   21772 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.32.2"
	I0408 22:56:32.827890   21772 command_runner.go:130] >       ],
	I0408 22:56:32.827896   21772 command_runner.go:130] >       "repoDigests": [
	I0408 22:56:32.827908   21772 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:48e677803a23233a10a796f3d7edc73223e3fbaceb6113665c1015464a743e9d",
	I0408 22:56:32.827916   21772 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:c47449f3e751588ea0cb74e325e0f83db335a415f4f4c7fb147375dd6c84757f"
	I0408 22:56:32.827922   21772 command_runner.go:130] >       ],
	I0408 22:56:32.827925   21772 command_runner.go:130] >       "size": "98055648",
	I0408 22:56:32.827929   21772 command_runner.go:130] >       "uid": {
	I0408 22:56:32.827932   21772 command_runner.go:130] >         "value": "0"
	I0408 22:56:32.827936   21772 command_runner.go:130] >       },
	I0408 22:56:32.827949   21772 command_runner.go:130] >       "username": "",
	I0408 22:56:32.827954   21772 command_runner.go:130] >       "spec": null,
	I0408 22:56:32.827958   21772 command_runner.go:130] >       "pinned": false
	I0408 22:56:32.827966   21772 command_runner.go:130] >     },
	I0408 22:56:32.827970   21772 command_runner.go:130] >     {
	I0408 22:56:32.827976   21772 command_runner.go:130] >       "id": "b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389",
	I0408 22:56:32.827982   21772 command_runner.go:130] >       "repoTags": [
	I0408 22:56:32.827987   21772 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.32.2"
	I0408 22:56:32.827993   21772 command_runner.go:130] >       ],
	I0408 22:56:32.827996   21772 command_runner.go:130] >       "repoDigests": [
	I0408 22:56:32.828003   21772 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:01669d976f198e210414e4864454330f6cbd4e5fedf1570b0340d206442f2ae5",
	I0408 22:56:32.828013   21772 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:399aa50f4d1361c59dc458e634506d02de32613d03a9a614a21058741162ef90"
	I0408 22:56:32.828019   21772 command_runner.go:130] >       ],
	I0408 22:56:32.828022   21772 command_runner.go:130] >       "size": "90793286",
	I0408 22:56:32.828026   21772 command_runner.go:130] >       "uid": {
	I0408 22:56:32.828029   21772 command_runner.go:130] >         "value": "0"
	I0408 22:56:32.828033   21772 command_runner.go:130] >       },
	I0408 22:56:32.828036   21772 command_runner.go:130] >       "username": "",
	I0408 22:56:32.828043   21772 command_runner.go:130] >       "spec": null,
	I0408 22:56:32.828046   21772 command_runner.go:130] >       "pinned": false
	I0408 22:56:32.828049   21772 command_runner.go:130] >     },
	I0408 22:56:32.828052   21772 command_runner.go:130] >     {
	I0408 22:56:32.828058   21772 command_runner.go:130] >       "id": "f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5",
	I0408 22:56:32.828064   21772 command_runner.go:130] >       "repoTags": [
	I0408 22:56:32.828069   21772 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.32.2"
	I0408 22:56:32.828074   21772 command_runner.go:130] >       ],
	I0408 22:56:32.828078   21772 command_runner.go:130] >       "repoDigests": [
	I0408 22:56:32.828085   21772 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:83c025f0faa6799fab6645102a98138e39a9a7db2be3bc792c79d72659b1805d",
	I0408 22:56:32.828094   21772 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:ab90de2ec2cbade95df799a63d85e438f51817055ecee067b694fdd0f776e15d"
	I0408 22:56:32.828097   21772 command_runner.go:130] >       ],
	I0408 22:56:32.828102   21772 command_runner.go:130] >       "size": "95271321",
	I0408 22:56:32.828108   21772 command_runner.go:130] >       "uid": null,
	I0408 22:56:32.828111   21772 command_runner.go:130] >       "username": "",
	I0408 22:56:32.828115   21772 command_runner.go:130] >       "spec": null,
	I0408 22:56:32.828119   21772 command_runner.go:130] >       "pinned": false
	I0408 22:56:32.828124   21772 command_runner.go:130] >     },
	I0408 22:56:32.828131   21772 command_runner.go:130] >     {
	I0408 22:56:32.828140   21772 command_runner.go:130] >       "id": "d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d",
	I0408 22:56:32.828144   21772 command_runner.go:130] >       "repoTags": [
	I0408 22:56:32.828150   21772 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.32.2"
	I0408 22:56:32.828159   21772 command_runner.go:130] >       ],
	I0408 22:56:32.828165   21772 command_runner.go:130] >       "repoDigests": [
	I0408 22:56:32.828207   21772 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:45710d74cfd5aa10a001d0cf81747b77c28617444ffee0503d12f1dcd7450f76",
	I0408 22:56:32.828220   21772 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:c98f93221ffa10bfb46b85966915759dbcaf957098364763242e814fee84363b"
	I0408 22:56:32.828223   21772 command_runner.go:130] >       ],
	I0408 22:56:32.828227   21772 command_runner.go:130] >       "size": "70653254",
	I0408 22:56:32.828230   21772 command_runner.go:130] >       "uid": {
	I0408 22:56:32.828233   21772 command_runner.go:130] >         "value": "0"
	I0408 22:56:32.828236   21772 command_runner.go:130] >       },
	I0408 22:56:32.828239   21772 command_runner.go:130] >       "username": "",
	I0408 22:56:32.828243   21772 command_runner.go:130] >       "spec": null,
	I0408 22:56:32.828247   21772 command_runner.go:130] >       "pinned": false
	I0408 22:56:32.828250   21772 command_runner.go:130] >     },
	I0408 22:56:32.828253   21772 command_runner.go:130] >     {
	I0408 22:56:32.828259   21772 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0408 22:56:32.828265   21772 command_runner.go:130] >       "repoTags": [
	I0408 22:56:32.828269   21772 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0408 22:56:32.828272   21772 command_runner.go:130] >       ],
	I0408 22:56:32.828276   21772 command_runner.go:130] >       "repoDigests": [
	I0408 22:56:32.828283   21772 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0408 22:56:32.828292   21772 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0408 22:56:32.828295   21772 command_runner.go:130] >       ],
	I0408 22:56:32.828298   21772 command_runner.go:130] >       "size": "742080",
	I0408 22:56:32.828302   21772 command_runner.go:130] >       "uid": {
	I0408 22:56:32.828305   21772 command_runner.go:130] >         "value": "65535"
	I0408 22:56:32.828308   21772 command_runner.go:130] >       },
	I0408 22:56:32.828312   21772 command_runner.go:130] >       "username": "",
	I0408 22:56:32.828318   21772 command_runner.go:130] >       "spec": null,
	I0408 22:56:32.828324   21772 command_runner.go:130] >       "pinned": true
	I0408 22:56:32.828331   21772 command_runner.go:130] >     }
	I0408 22:56:32.828334   21772 command_runner.go:130] >   ]
	I0408 22:56:32.828337   21772 command_runner.go:130] > }
	I0408 22:56:32.829120   21772 crio.go:514] all images are preloaded for cri-o runtime.
	I0408 22:56:32.829135   21772 crio.go:433] Images already preloaded, skipping extraction
	I0408 22:56:32.829174   21772 ssh_runner.go:195] Run: sudo crictl images --output json
	I0408 22:56:32.860598   21772 command_runner.go:130] > {
	I0408 22:56:32.860616   21772 command_runner.go:130] >   "images": [
	I0408 22:56:32.860620   21772 command_runner.go:130] >     {
	I0408 22:56:32.860628   21772 command_runner.go:130] >       "id": "d300845f67aebd4f27f549889087215f476cecdd6d9a715b49a4152857549c56",
	I0408 22:56:32.860632   21772 command_runner.go:130] >       "repoTags": [
	I0408 22:56:32.860637   21772 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20241212-9f82dd49"
	I0408 22:56:32.860641   21772 command_runner.go:130] >       ],
	I0408 22:56:32.860645   21772 command_runner.go:130] >       "repoDigests": [
	I0408 22:56:32.860658   21772 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:56ea59f77258052c4506076525318ffa66817500f68e94a50fdf7d600a280d26",
	I0408 22:56:32.860666   21772 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a3c74735c5fc7cab683a2f94dddec913052aacaa8d8b773c88d428e8dee3dd40"
	I0408 22:56:32.860669   21772 command_runner.go:130] >       ],
	I0408 22:56:32.860674   21772 command_runner.go:130] >       "size": "95714353",
	I0408 22:56:32.860677   21772 command_runner.go:130] >       "uid": null,
	I0408 22:56:32.860682   21772 command_runner.go:130] >       "username": "",
	I0408 22:56:32.860690   21772 command_runner.go:130] >       "spec": null,
	I0408 22:56:32.860694   21772 command_runner.go:130] >       "pinned": false
	I0408 22:56:32.860699   21772 command_runner.go:130] >     },
	I0408 22:56:32.860702   21772 command_runner.go:130] >     {
	I0408 22:56:32.860708   21772 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0408 22:56:32.860712   21772 command_runner.go:130] >       "repoTags": [
	I0408 22:56:32.860719   21772 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0408 22:56:32.860722   21772 command_runner.go:130] >       ],
	I0408 22:56:32.860727   21772 command_runner.go:130] >       "repoDigests": [
	I0408 22:56:32.860734   21772 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0408 22:56:32.860742   21772 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0408 22:56:32.860746   21772 command_runner.go:130] >       ],
	I0408 22:56:32.860752   21772 command_runner.go:130] >       "size": "31470524",
	I0408 22:56:32.860757   21772 command_runner.go:130] >       "uid": null,
	I0408 22:56:32.860761   21772 command_runner.go:130] >       "username": "",
	I0408 22:56:32.860764   21772 command_runner.go:130] >       "spec": null,
	I0408 22:56:32.860768   21772 command_runner.go:130] >       "pinned": false
	I0408 22:56:32.860771   21772 command_runner.go:130] >     },
	I0408 22:56:32.860774   21772 command_runner.go:130] >     {
	I0408 22:56:32.860780   21772 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I0408 22:56:32.860784   21772 command_runner.go:130] >       "repoTags": [
	I0408 22:56:32.860789   21772 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I0408 22:56:32.860793   21772 command_runner.go:130] >       ],
	I0408 22:56:32.860797   21772 command_runner.go:130] >       "repoDigests": [
	I0408 22:56:32.860805   21772 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I0408 22:56:32.860814   21772 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I0408 22:56:32.860818   21772 command_runner.go:130] >       ],
	I0408 22:56:32.860828   21772 command_runner.go:130] >       "size": "63273227",
	I0408 22:56:32.860834   21772 command_runner.go:130] >       "uid": null,
	I0408 22:56:32.860838   21772 command_runner.go:130] >       "username": "nonroot",
	I0408 22:56:32.860842   21772 command_runner.go:130] >       "spec": null,
	I0408 22:56:32.860848   21772 command_runner.go:130] >       "pinned": false
	I0408 22:56:32.860851   21772 command_runner.go:130] >     },
	I0408 22:56:32.860854   21772 command_runner.go:130] >     {
	I0408 22:56:32.860860   21772 command_runner.go:130] >       "id": "a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc",
	I0408 22:56:32.860866   21772 command_runner.go:130] >       "repoTags": [
	I0408 22:56:32.860871   21772 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.16-0"
	I0408 22:56:32.860878   21772 command_runner.go:130] >       ],
	I0408 22:56:32.860882   21772 command_runner.go:130] >       "repoDigests": [
	I0408 22:56:32.860891   21772 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:1d988b04a9476119cdbc2025ba58f6eec19554caf36edb43c357ff412d07e990",
	I0408 22:56:32.860905   21772 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5"
	I0408 22:56:32.860911   21772 command_runner.go:130] >       ],
	I0408 22:56:32.860915   21772 command_runner.go:130] >       "size": "151021823",
	I0408 22:56:32.860921   21772 command_runner.go:130] >       "uid": {
	I0408 22:56:32.860925   21772 command_runner.go:130] >         "value": "0"
	I0408 22:56:32.860931   21772 command_runner.go:130] >       },
	I0408 22:56:32.860946   21772 command_runner.go:130] >       "username": "",
	I0408 22:56:32.860953   21772 command_runner.go:130] >       "spec": null,
	I0408 22:56:32.860957   21772 command_runner.go:130] >       "pinned": false
	I0408 22:56:32.860962   21772 command_runner.go:130] >     },
	I0408 22:56:32.860965   21772 command_runner.go:130] >     {
	I0408 22:56:32.860971   21772 command_runner.go:130] >       "id": "85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef",
	I0408 22:56:32.860977   21772 command_runner.go:130] >       "repoTags": [
	I0408 22:56:32.860982   21772 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.32.2"
	I0408 22:56:32.860985   21772 command_runner.go:130] >       ],
	I0408 22:56:32.860990   21772 command_runner.go:130] >       "repoDigests": [
	I0408 22:56:32.860997   21772 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:48e677803a23233a10a796f3d7edc73223e3fbaceb6113665c1015464a743e9d",
	I0408 22:56:32.861007   21772 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:c47449f3e751588ea0cb74e325e0f83db335a415f4f4c7fb147375dd6c84757f"
	I0408 22:56:32.861010   21772 command_runner.go:130] >       ],
	I0408 22:56:32.861014   21772 command_runner.go:130] >       "size": "98055648",
	I0408 22:56:32.861024   21772 command_runner.go:130] >       "uid": {
	I0408 22:56:32.861030   21772 command_runner.go:130] >         "value": "0"
	I0408 22:56:32.861033   21772 command_runner.go:130] >       },
	I0408 22:56:32.861037   21772 command_runner.go:130] >       "username": "",
	I0408 22:56:32.861043   21772 command_runner.go:130] >       "spec": null,
	I0408 22:56:32.861046   21772 command_runner.go:130] >       "pinned": false
	I0408 22:56:32.861049   21772 command_runner.go:130] >     },
	I0408 22:56:32.861052   21772 command_runner.go:130] >     {
	I0408 22:56:32.861060   21772 command_runner.go:130] >       "id": "b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389",
	I0408 22:56:32.861064   21772 command_runner.go:130] >       "repoTags": [
	I0408 22:56:32.861071   21772 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.32.2"
	I0408 22:56:32.861076   21772 command_runner.go:130] >       ],
	I0408 22:56:32.861082   21772 command_runner.go:130] >       "repoDigests": [
	I0408 22:56:32.861090   21772 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:01669d976f198e210414e4864454330f6cbd4e5fedf1570b0340d206442f2ae5",
	I0408 22:56:32.861099   21772 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:399aa50f4d1361c59dc458e634506d02de32613d03a9a614a21058741162ef90"
	I0408 22:56:32.861103   21772 command_runner.go:130] >       ],
	I0408 22:56:32.861106   21772 command_runner.go:130] >       "size": "90793286",
	I0408 22:56:32.861110   21772 command_runner.go:130] >       "uid": {
	I0408 22:56:32.861114   21772 command_runner.go:130] >         "value": "0"
	I0408 22:56:32.861116   21772 command_runner.go:130] >       },
	I0408 22:56:32.861120   21772 command_runner.go:130] >       "username": "",
	I0408 22:56:32.861126   21772 command_runner.go:130] >       "spec": null,
	I0408 22:56:32.861130   21772 command_runner.go:130] >       "pinned": false
	I0408 22:56:32.861133   21772 command_runner.go:130] >     },
	I0408 22:56:32.861136   21772 command_runner.go:130] >     {
	I0408 22:56:32.861143   21772 command_runner.go:130] >       "id": "f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5",
	I0408 22:56:32.861149   21772 command_runner.go:130] >       "repoTags": [
	I0408 22:56:32.861153   21772 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.32.2"
	I0408 22:56:32.861158   21772 command_runner.go:130] >       ],
	I0408 22:56:32.861162   21772 command_runner.go:130] >       "repoDigests": [
	I0408 22:56:32.861169   21772 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:83c025f0faa6799fab6645102a98138e39a9a7db2be3bc792c79d72659b1805d",
	I0408 22:56:32.861178   21772 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:ab90de2ec2cbade95df799a63d85e438f51817055ecee067b694fdd0f776e15d"
	I0408 22:56:32.861182   21772 command_runner.go:130] >       ],
	I0408 22:56:32.861190   21772 command_runner.go:130] >       "size": "95271321",
	I0408 22:56:32.861196   21772 command_runner.go:130] >       "uid": null,
	I0408 22:56:32.861200   21772 command_runner.go:130] >       "username": "",
	I0408 22:56:32.861204   21772 command_runner.go:130] >       "spec": null,
	I0408 22:56:32.861207   21772 command_runner.go:130] >       "pinned": false
	I0408 22:56:32.861210   21772 command_runner.go:130] >     },
	I0408 22:56:32.861213   21772 command_runner.go:130] >     {
	I0408 22:56:32.861219   21772 command_runner.go:130] >       "id": "d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d",
	I0408 22:56:32.861224   21772 command_runner.go:130] >       "repoTags": [
	I0408 22:56:32.861229   21772 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.32.2"
	I0408 22:56:32.861234   21772 command_runner.go:130] >       ],
	I0408 22:56:32.861238   21772 command_runner.go:130] >       "repoDigests": [
	I0408 22:56:32.861256   21772 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:45710d74cfd5aa10a001d0cf81747b77c28617444ffee0503d12f1dcd7450f76",
	I0408 22:56:32.861266   21772 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:c98f93221ffa10bfb46b85966915759dbcaf957098364763242e814fee84363b"
	I0408 22:56:32.861269   21772 command_runner.go:130] >       ],
	I0408 22:56:32.861273   21772 command_runner.go:130] >       "size": "70653254",
	I0408 22:56:32.861275   21772 command_runner.go:130] >       "uid": {
	I0408 22:56:32.861279   21772 command_runner.go:130] >         "value": "0"
	I0408 22:56:32.861282   21772 command_runner.go:130] >       },
	I0408 22:56:32.861286   21772 command_runner.go:130] >       "username": "",
	I0408 22:56:32.861289   21772 command_runner.go:130] >       "spec": null,
	I0408 22:56:32.861293   21772 command_runner.go:130] >       "pinned": false
	I0408 22:56:32.861296   21772 command_runner.go:130] >     },
	I0408 22:56:32.861299   21772 command_runner.go:130] >     {
	I0408 22:56:32.861305   21772 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0408 22:56:32.861314   21772 command_runner.go:130] >       "repoTags": [
	I0408 22:56:32.861319   21772 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0408 22:56:32.861322   21772 command_runner.go:130] >       ],
	I0408 22:56:32.861325   21772 command_runner.go:130] >       "repoDigests": [
	I0408 22:56:32.861332   21772 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0408 22:56:32.861341   21772 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0408 22:56:32.861345   21772 command_runner.go:130] >       ],
	I0408 22:56:32.861349   21772 command_runner.go:130] >       "size": "742080",
	I0408 22:56:32.861357   21772 command_runner.go:130] >       "uid": {
	I0408 22:56:32.861364   21772 command_runner.go:130] >         "value": "65535"
	I0408 22:56:32.861367   21772 command_runner.go:130] >       },
	I0408 22:56:32.861370   21772 command_runner.go:130] >       "username": "",
	I0408 22:56:32.861374   21772 command_runner.go:130] >       "spec": null,
	I0408 22:56:32.861380   21772 command_runner.go:130] >       "pinned": true
	I0408 22:56:32.861382   21772 command_runner.go:130] >     }
	I0408 22:56:32.861385   21772 command_runner.go:130] >   ]
	I0408 22:56:32.861388   21772 command_runner.go:130] > }
	I0408 22:56:32.862015   21772 crio.go:514] all images are preloaded for cri-o runtime.
	I0408 22:56:32.862029   21772 cache_images.go:84] Images are preloaded, skipping loading
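crio.go:514 and cache_images.go:84 conclude that no image extraction or loading is needed because every required tag already appears in the `sudo crictl images --output json` output dumped above. A minimal sketch of that decision, assuming a hypothetical allPreloaded helper and an illustrative required-image list:

package main

import (
	"encoding/json"
	"fmt"
)

// imageList matches the shape of `crictl images --output json` as seen in
// the log; only the fields needed for the preload check are modelled.
type imageList struct {
	Images []struct {
		ID       string   `json:"id"`
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

// allPreloaded reports whether every required tag already exists in the
// runtime's image store, i.e. the condition the log records as
// "all images are preloaded".
func allPreloaded(raw []byte, required []string) (bool, error) {
	var list imageList
	if err := json.Unmarshal(raw, &list); err != nil {
		return false, err
	}
	have := map[string]bool{}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			have[tag] = true
		}
	}
	for _, tag := range required {
		if !have[tag] {
			return false, nil
		}
	}
	return true, nil
}

func main() {
	raw := []byte(`{"images":[{"id":"abc","repoTags":["registry.k8s.io/pause:3.10"]}]}`)
	ok, err := allPreloaded(raw, []string{"registry.k8s.io/pause:3.10"})
	fmt.Println(ok, err)
}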
	I0408 22:56:32.862035   21772 kubeadm.go:934] updating node { 192.168.39.234 8441 v1.32.2 crio true true} ...
	I0408 22:56:32.862119   21772 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-546336 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.234
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:functional-546336 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
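kubeadm.go:946 renders the kubelet systemd drop-in shown above, filling the ExecStart flags from the node config (binary path, version, hostname override, node IP). A small sketch of assembling that flag string (the function and parameter names are illustrative; the values mirror the log):

package main

import "fmt"

// kubeletExecStart assembles the flag string seen in the [Service] section
// above from the node's settings.
func kubeletExecStart(binDir, version, hostname, nodeIP string) string {
	return fmt.Sprintf(
		"%s/binaries/%s/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf "+
			"--config=/var/lib/kubelet/config.yaml --hostname-override=%s "+
			"--kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=%s",
		binDir, version, hostname, nodeIP)
}

func main() {
	fmt.Println(kubeletExecStart("/var/lib/minikube", "v1.32.2", "functional-546336", "192.168.39.234"))
}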
	I0408 22:56:32.862176   21772 ssh_runner.go:195] Run: crio config
	I0408 22:56:32.900028   21772 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0408 22:56:32.900049   21772 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0408 22:56:32.900055   21772 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0408 22:56:32.900058   21772 command_runner.go:130] > #
	I0408 22:56:32.900065   21772 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0408 22:56:32.900071   21772 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0408 22:56:32.900077   21772 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0408 22:56:32.900097   21772 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0408 22:56:32.900101   21772 command_runner.go:130] > # reload'.
	I0408 22:56:32.900107   21772 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0408 22:56:32.900113   21772 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0408 22:56:32.900120   21772 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0408 22:56:32.900130   21772 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0408 22:56:32.900135   21772 command_runner.go:130] > [crio]
	I0408 22:56:32.900144   21772 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0408 22:56:32.900152   21772 command_runner.go:130] > # containers images, in this directory.
	I0408 22:56:32.900158   21772 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0408 22:56:32.900171   21772 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0408 22:56:32.900182   21772 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0408 22:56:32.900190   21772 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0408 22:56:32.900199   21772 command_runner.go:130] > # imagestore = ""
	I0408 22:56:32.900205   21772 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0408 22:56:32.900213   21772 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0408 22:56:32.900221   21772 command_runner.go:130] > storage_driver = "overlay"
	I0408 22:56:32.900232   21772 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0408 22:56:32.900240   21772 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0408 22:56:32.900247   21772 command_runner.go:130] > storage_option = [
	I0408 22:56:32.900262   21772 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0408 22:56:32.900275   21772 command_runner.go:130] > ]
	I0408 22:56:32.900286   21772 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0408 22:56:32.900296   21772 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0408 22:56:32.900301   21772 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0408 22:56:32.900307   21772 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0408 22:56:32.900312   21772 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0408 22:56:32.900316   21772 command_runner.go:130] > # always happen on a node reboot
	I0408 22:56:32.900323   21772 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0408 22:56:32.900351   21772 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0408 22:56:32.900362   21772 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0408 22:56:32.900370   21772 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0408 22:56:32.900379   21772 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0408 22:56:32.900389   21772 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0408 22:56:32.900401   21772 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0408 22:56:32.900408   21772 command_runner.go:130] > # internal_wipe = true
	I0408 22:56:32.900421   21772 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0408 22:56:32.900433   21772 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0408 22:56:32.900445   21772 command_runner.go:130] > # internal_repair = false
	I0408 22:56:32.900456   21772 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0408 22:56:32.900465   21772 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0408 22:56:32.900477   21772 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0408 22:56:32.900488   21772 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0408 22:56:32.900500   21772 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0408 22:56:32.900506   21772 command_runner.go:130] > [crio.api]
	I0408 22:56:32.900514   21772 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0408 22:56:32.900524   21772 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0408 22:56:32.900532   21772 command_runner.go:130] > # IP address on which the stream server will listen.
	I0408 22:56:32.900539   21772 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0408 22:56:32.900549   21772 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0408 22:56:32.900559   21772 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0408 22:56:32.900565   21772 command_runner.go:130] > # stream_port = "0"
	I0408 22:56:32.900572   21772 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0408 22:56:32.900581   21772 command_runner.go:130] > # stream_enable_tls = false
	I0408 22:56:32.900589   21772 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0408 22:56:32.900593   21772 command_runner.go:130] > # stream_idle_timeout = ""
	I0408 22:56:32.900601   21772 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0408 22:56:32.900607   21772 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0408 22:56:32.900614   21772 command_runner.go:130] > # minutes.
	I0408 22:56:32.900620   21772 command_runner.go:130] > # stream_tls_cert = ""
	I0408 22:56:32.900631   21772 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0408 22:56:32.900649   21772 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0408 22:56:32.900658   21772 command_runner.go:130] > # stream_tls_key = ""
	I0408 22:56:32.900667   21772 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0408 22:56:32.900679   21772 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0408 22:56:32.900709   21772 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0408 22:56:32.900720   21772 command_runner.go:130] > # stream_tls_ca = ""
	I0408 22:56:32.900732   21772 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0408 22:56:32.900742   21772 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0408 22:56:32.900753   21772 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0408 22:56:32.900763   21772 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0408 22:56:32.900773   21772 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0408 22:56:32.900785   21772 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0408 22:56:32.900793   21772 command_runner.go:130] > [crio.runtime]
	I0408 22:56:32.900803   21772 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0408 22:56:32.900815   21772 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0408 22:56:32.900822   21772 command_runner.go:130] > # "nofile=1024:2048"
	I0408 22:56:32.900832   21772 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0408 22:56:32.900841   21772 command_runner.go:130] > # default_ulimits = [
	I0408 22:56:32.900847   21772 command_runner.go:130] > # ]
	I0408 22:56:32.900860   21772 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0408 22:56:32.900873   21772 command_runner.go:130] > # no_pivot = false
	I0408 22:56:32.900885   21772 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0408 22:56:32.900897   21772 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0408 22:56:32.900907   21772 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0408 22:56:32.900918   21772 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0408 22:56:32.900932   21772 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0408 22:56:32.900959   21772 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0408 22:56:32.900970   21772 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0408 22:56:32.900976   21772 command_runner.go:130] > # Cgroup setting for conmon
	I0408 22:56:32.900987   21772 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0408 22:56:32.900996   21772 command_runner.go:130] > conmon_cgroup = "pod"
	I0408 22:56:32.901006   21772 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0408 22:56:32.901017   21772 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0408 22:56:32.901030   21772 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0408 22:56:32.901038   21772 command_runner.go:130] > conmon_env = [
	I0408 22:56:32.901047   21772 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0408 22:56:32.901055   21772 command_runner.go:130] > ]
	I0408 22:56:32.901064   21772 command_runner.go:130] > # Additional environment variables to set for all the
	I0408 22:56:32.901075   21772 command_runner.go:130] > # containers. These are overridden if set in the
	I0408 22:56:32.901087   21772 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0408 22:56:32.901094   21772 command_runner.go:130] > # default_env = [
	I0408 22:56:32.901103   21772 command_runner.go:130] > # ]
	I0408 22:56:32.901111   21772 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0408 22:56:32.901125   21772 command_runner.go:130] > # This option is deprecated, and will be interpreted from whether SELinux is enabled on the host in the future.
	I0408 22:56:32.901134   21772 command_runner.go:130] > # selinux = false
	I0408 22:56:32.901143   21772 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0408 22:56:32.901155   21772 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0408 22:56:32.901167   21772 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0408 22:56:32.901177   21772 command_runner.go:130] > # seccomp_profile = ""
	I0408 22:56:32.901186   21772 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0408 22:56:32.901197   21772 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0408 22:56:32.901207   21772 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0408 22:56:32.901217   21772 command_runner.go:130] > # which might increase security.
	I0408 22:56:32.901225   21772 command_runner.go:130] > # This option is currently deprecated,
	I0408 22:56:32.901237   21772 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0408 22:56:32.901255   21772 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0408 22:56:32.901268   21772 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0408 22:56:32.901288   21772 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0408 22:56:32.901314   21772 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0408 22:56:32.901327   21772 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0408 22:56:32.901335   21772 command_runner.go:130] > # This option supports live configuration reload.
	I0408 22:56:32.901345   21772 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0408 22:56:32.901353   21772 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0408 22:56:32.901362   21772 command_runner.go:130] > # the cgroup blockio controller.
	I0408 22:56:32.901369   21772 command_runner.go:130] > # blockio_config_file = ""
	I0408 22:56:32.901382   21772 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0408 22:56:32.901388   21772 command_runner.go:130] > # blockio parameters.
	I0408 22:56:32.901397   21772 command_runner.go:130] > # blockio_reload = false
	I0408 22:56:32.901407   21772 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0408 22:56:32.901414   21772 command_runner.go:130] > # irqbalance daemon.
	I0408 22:56:32.901419   21772 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0408 22:56:32.901425   21772 command_runner.go:130] > # irqbalance_config_restore_file allows setting a cpu mask CRI-O should
	I0408 22:56:32.901431   21772 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0408 22:56:32.901438   21772 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0408 22:56:32.901446   21772 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0408 22:56:32.901454   21772 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0408 22:56:32.901461   21772 command_runner.go:130] > # This option supports live configuration reload.
	I0408 22:56:32.901468   21772 command_runner.go:130] > # rdt_config_file = ""
	I0408 22:56:32.901476   21772 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0408 22:56:32.901483   21772 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0408 22:56:32.901522   21772 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0408 22:56:32.901531   21772 command_runner.go:130] > # separate_pull_cgroup = ""
	I0408 22:56:32.901538   21772 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0408 22:56:32.901549   21772 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0408 22:56:32.901555   21772 command_runner.go:130] > # will be added.
	I0408 22:56:32.901562   21772 command_runner.go:130] > # default_capabilities = [
	I0408 22:56:32.901571   21772 command_runner.go:130] > # 	"CHOWN",
	I0408 22:56:32.901577   21772 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0408 22:56:32.901585   21772 command_runner.go:130] > # 	"FSETID",
	I0408 22:56:32.901590   21772 command_runner.go:130] > # 	"FOWNER",
	I0408 22:56:32.901596   21772 command_runner.go:130] > # 	"SETGID",
	I0408 22:56:32.901609   21772 command_runner.go:130] > # 	"SETUID",
	I0408 22:56:32.901618   21772 command_runner.go:130] > # 	"SETPCAP",
	I0408 22:56:32.901622   21772 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0408 22:56:32.901628   21772 command_runner.go:130] > # 	"KILL",
	I0408 22:56:32.901632   21772 command_runner.go:130] > # ]
	I0408 22:56:32.901643   21772 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0408 22:56:32.901657   21772 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0408 22:56:32.901671   21772 command_runner.go:130] > # add_inheritable_capabilities = false
	I0408 22:56:32.901681   21772 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0408 22:56:32.901693   21772 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0408 22:56:32.901702   21772 command_runner.go:130] > default_sysctls = [
	I0408 22:56:32.901710   21772 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0408 22:56:32.901718   21772 command_runner.go:130] > ]
	I0408 22:56:32.901725   21772 command_runner.go:130] > # List of devices on the host that a
	I0408 22:56:32.901738   21772 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0408 22:56:32.901744   21772 command_runner.go:130] > # allowed_devices = [
	I0408 22:56:32.901753   21772 command_runner.go:130] > # 	"/dev/fuse",
	I0408 22:56:32.901759   21772 command_runner.go:130] > # ]
	I0408 22:56:32.901768   21772 command_runner.go:130] > # List of additional devices, specified as
	I0408 22:56:32.901782   21772 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0408 22:56:32.901793   21772 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0408 22:56:32.901802   21772 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0408 22:56:32.901811   21772 command_runner.go:130] > # additional_devices = [
	I0408 22:56:32.901816   21772 command_runner.go:130] > # ]
	I0408 22:56:32.901827   21772 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0408 22:56:32.901834   21772 command_runner.go:130] > # cdi_spec_dirs = [
	I0408 22:56:32.901842   21772 command_runner.go:130] > # 	"/etc/cdi",
	I0408 22:56:32.901848   21772 command_runner.go:130] > # 	"/var/run/cdi",
	I0408 22:56:32.901856   21772 command_runner.go:130] > # ]
	I0408 22:56:32.901866   21772 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0408 22:56:32.901878   21772 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0408 22:56:32.901885   21772 command_runner.go:130] > # Defaults to false.
	I0408 22:56:32.901891   21772 command_runner.go:130] > # device_ownership_from_security_context = false
	I0408 22:56:32.901909   21772 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0408 22:56:32.901922   21772 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0408 22:56:32.901928   21772 command_runner.go:130] > # hooks_dir = [
	I0408 22:56:32.901936   21772 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0408 22:56:32.901950   21772 command_runner.go:130] > # ]
	I0408 22:56:32.901959   21772 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0408 22:56:32.901970   21772 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0408 22:56:32.901979   21772 command_runner.go:130] > # its default mounts from the following two files:
	I0408 22:56:32.901990   21772 command_runner.go:130] > #
	I0408 22:56:32.902004   21772 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0408 22:56:32.902015   21772 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0408 22:56:32.902024   21772 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0408 22:56:32.902033   21772 command_runner.go:130] > #
	I0408 22:56:32.902042   21772 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0408 22:56:32.902054   21772 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0408 22:56:32.902067   21772 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0408 22:56:32.902078   21772 command_runner.go:130] > #      only add mounts it finds in this file.
	I0408 22:56:32.902083   21772 command_runner.go:130] > #
	I0408 22:56:32.902092   21772 command_runner.go:130] > # default_mounts_file = ""
	I0408 22:56:32.902103   21772 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0408 22:56:32.902115   21772 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0408 22:56:32.902125   21772 command_runner.go:130] > pids_limit = 1024
	I0408 22:56:32.902135   21772 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0408 22:56:32.902144   21772 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0408 22:56:32.902151   21772 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0408 22:56:32.902166   21772 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0408 22:56:32.902177   21772 command_runner.go:130] > # log_size_max = -1
	I0408 22:56:32.902187   21772 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0408 22:56:32.902194   21772 command_runner.go:130] > # log_to_journald = false
	I0408 22:56:32.902206   21772 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0408 22:56:32.902216   21772 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0408 22:56:32.902224   21772 command_runner.go:130] > # Path to directory for container attach sockets.
	I0408 22:56:32.902234   21772 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0408 22:56:32.902254   21772 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0408 22:56:32.902264   21772 command_runner.go:130] > # bind_mount_prefix = ""
	I0408 22:56:32.902272   21772 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0408 22:56:32.902281   21772 command_runner.go:130] > # read_only = false
	I0408 22:56:32.902290   21772 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0408 22:56:32.902303   21772 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0408 22:56:32.902311   21772 command_runner.go:130] > # live configuration reload.
	I0408 22:56:32.902315   21772 command_runner.go:130] > # log_level = "info"
	I0408 22:56:32.902325   21772 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0408 22:56:32.902334   21772 command_runner.go:130] > # This option supports live configuration reload.
	I0408 22:56:32.902343   21772 command_runner.go:130] > # log_filter = ""
	I0408 22:56:32.902352   21772 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0408 22:56:32.902366   21772 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0408 22:56:32.902373   21772 command_runner.go:130] > # separated by comma.
	I0408 22:56:32.902387   21772 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0408 22:56:32.902396   21772 command_runner.go:130] > # uid_mappings = ""
	I0408 22:56:32.902405   21772 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0408 22:56:32.902417   21772 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0408 22:56:32.902427   21772 command_runner.go:130] > # separated by comma.
	I0408 22:56:32.902442   21772 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0408 22:56:32.902450   21772 command_runner.go:130] > # gid_mappings = ""
	I0408 22:56:32.902459   21772 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0408 22:56:32.902472   21772 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0408 22:56:32.902481   21772 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0408 22:56:32.902489   21772 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0408 22:56:32.902499   21772 command_runner.go:130] > # minimum_mappable_uid = -1
	I0408 22:56:32.902508   21772 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0408 22:56:32.902521   21772 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0408 22:56:32.902533   21772 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0408 22:56:32.902545   21772 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0408 22:56:32.902554   21772 command_runner.go:130] > # minimum_mappable_gid = -1
	I0408 22:56:32.902563   21772 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0408 22:56:32.902571   21772 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0408 22:56:32.902584   21772 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0408 22:56:32.902595   21772 command_runner.go:130] > # ctr_stop_timeout = 30
	I0408 22:56:32.902608   21772 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0408 22:56:32.902619   21772 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0408 22:56:32.902629   21772 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0408 22:56:32.902637   21772 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0408 22:56:32.902646   21772 command_runner.go:130] > drop_infra_ctr = false
	I0408 22:56:32.902653   21772 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0408 22:56:32.902661   21772 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0408 22:56:32.902672   21772 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0408 22:56:32.902683   21772 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0408 22:56:32.902696   21772 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0408 22:56:32.902708   21772 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0408 22:56:32.902719   21772 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0408 22:56:32.902730   21772 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0408 22:56:32.902735   21772 command_runner.go:130] > # shared_cpuset = ""
	I0408 22:56:32.902740   21772 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0408 22:56:32.902747   21772 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0408 22:56:32.902753   21772 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0408 22:56:32.902767   21772 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0408 22:56:32.902777   21772 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0408 22:56:32.902789   21772 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0408 22:56:32.902801   21772 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0408 22:56:32.902811   21772 command_runner.go:130] > # enable_criu_support = false
	I0408 22:56:32.902820   21772 command_runner.go:130] > # Enable/disable the generation of the container,
	I0408 22:56:32.902826   21772 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0408 22:56:32.902834   21772 command_runner.go:130] > # enable_pod_events = false
	I0408 22:56:32.902844   21772 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0408 22:56:32.902867   21772 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0408 22:56:32.902873   21772 command_runner.go:130] > # default_runtime = "runc"
	I0408 22:56:32.902884   21772 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0408 22:56:32.902897   21772 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0408 22:56:32.902917   21772 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0408 22:56:32.902928   21772 command_runner.go:130] > # creation as a file is not desired either.
	I0408 22:56:32.902945   21772 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0408 22:56:32.902956   21772 command_runner.go:130] > # the hostname is being managed dynamically.
	I0408 22:56:32.902962   21772 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0408 22:56:32.902970   21772 command_runner.go:130] > # ]
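	(Illustrative sketch: using the /etc/hostname example mentioned above, the option would be set like this; the entry is an assumption, not part of the captured config.)
	# absent_mount_sources_to_reject = [
	# 	"/etc/hostname",
	# ]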
	I0408 22:56:32.902983   21772 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0408 22:56:32.902993   21772 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0408 22:56:32.903002   21772 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0408 22:56:32.903013   21772 command_runner.go:130] > # Each entry in the table should follow the format:
	I0408 22:56:32.903022   21772 command_runner.go:130] > #
	I0408 22:56:32.903029   21772 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0408 22:56:32.903039   21772 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0408 22:56:32.903114   21772 command_runner.go:130] > # runtime_type = "oci"
	I0408 22:56:32.903129   21772 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0408 22:56:32.903136   21772 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0408 22:56:32.903142   21772 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0408 22:56:32.903150   21772 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0408 22:56:32.903156   21772 command_runner.go:130] > # monitor_env = []
	I0408 22:56:32.903164   21772 command_runner.go:130] > # privileged_without_host_devices = false
	I0408 22:56:32.903171   21772 command_runner.go:130] > # allowed_annotations = []
	I0408 22:56:32.903177   21772 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0408 22:56:32.903186   21772 command_runner.go:130] > # Where:
	I0408 22:56:32.903195   21772 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0408 22:56:32.903207   21772 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0408 22:56:32.903220   21772 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0408 22:56:32.903235   21772 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0408 22:56:32.903243   21772 command_runner.go:130] > #   in $PATH.
	I0408 22:56:32.903253   21772 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0408 22:56:32.903260   21772 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0408 22:56:32.903267   21772 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0408 22:56:32.903275   21772 command_runner.go:130] > #   state.
	I0408 22:56:32.903291   21772 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0408 22:56:32.903308   21772 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0408 22:56:32.903321   21772 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0408 22:56:32.903329   21772 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0408 22:56:32.903340   21772 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0408 22:56:32.903348   21772 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0408 22:56:32.903355   21772 command_runner.go:130] > #   The currently recognized values are:
	I0408 22:56:32.903368   21772 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0408 22:56:32.903382   21772 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0408 22:56:32.903394   21772 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0408 22:56:32.903404   21772 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0408 22:56:32.903418   21772 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0408 22:56:32.903429   21772 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0408 22:56:32.903443   21772 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0408 22:56:32.903456   21772 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0408 22:56:32.903467   21772 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0408 22:56:32.903479   21772 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0408 22:56:32.903489   21772 command_runner.go:130] > #   deprecated option "conmon".
	I0408 22:56:32.903501   21772 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0408 22:56:32.903513   21772 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0408 22:56:32.903527   21772 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0408 22:56:32.903538   21772 command_runner.go:130] > #   should be moved to the container's cgroup
	I0408 22:56:32.903548   21772 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0408 22:56:32.903557   21772 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0408 22:56:32.903568   21772 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0408 22:56:32.903577   21772 command_runner.go:130] > #   runtime executable paths for the runtime handler.
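	(Illustrative sketch, not part of the captured config: using the fields described above, a hypothetical additional handler entry, here named "crun" with assumed paths, might look like this.)
	# [crio.runtime.runtimes.crun]
	# runtime_path = "/usr/bin/crun"
	# runtime_type = "oci"
	# runtime_root = "/run/crun"
	# monitor_path = "/usr/libexec/crio/conmon"
	# monitor_cgroup = "pod"
	# allowed_annotations = ["io.kubernetes.cri-o.Devices"]
	# platform_runtime_paths = { "linux/arm64" = "/usr/bin/crun-arm64" }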
	I0408 22:56:32.903580   21772 command_runner.go:130] > #
	I0408 22:56:32.903588   21772 command_runner.go:130] > # Using the seccomp notifier feature:
	I0408 22:56:32.903595   21772 command_runner.go:130] > #
	I0408 22:56:32.903604   21772 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0408 22:56:32.903618   21772 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0408 22:56:32.903622   21772 command_runner.go:130] > #
	I0408 22:56:32.903632   21772 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0408 22:56:32.903644   21772 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0408 22:56:32.903657   21772 command_runner.go:130] > #
	I0408 22:56:32.903669   21772 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0408 22:56:32.903678   21772 command_runner.go:130] > # feature.
	I0408 22:56:32.903682   21772 command_runner.go:130] > #
	I0408 22:56:32.903694   21772 command_runner.go:130] > # If everything is set up, CRI-O will modify chosen seccomp profiles for
	I0408 22:56:32.903706   21772 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0408 22:56:32.903718   21772 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0408 22:56:32.903728   21772 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0408 22:56:32.903739   21772 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0408 22:56:32.903747   21772 command_runner.go:130] > #
	I0408 22:56:32.903756   21772 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0408 22:56:32.903766   21772 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0408 22:56:32.903769   21772 command_runner.go:130] > #
	I0408 22:56:32.903777   21772 command_runner.go:130] > # This also means that the Pod's "restartPolicy" has to be set to "Never",
	I0408 22:56:32.903789   21772 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0408 22:56:32.903797   21772 command_runner.go:130] > #
	I0408 22:56:32.903805   21772 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0408 22:56:32.903816   21772 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0408 22:56:32.903828   21772 command_runner.go:130] > # limitation.
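	(Illustrative sketch: to enable the notifier described above, the chosen runtime handler would list the annotation in allowed_annotations; the snippet below is an assumed example, not the active runc entry, which follows next.)
	# [crio.runtime.runtimes.runc]
	# allowed_annotations = ["io.kubernetes.cri-o.seccompNotifierAction"]
	The pod would then set the io.kubernetes.cri-o.seccompNotifierAction annotation (for example to "stop") and use restartPolicy: Never, as noted above.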
	I0408 22:56:32.903839   21772 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0408 22:56:32.903846   21772 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0408 22:56:32.903850   21772 command_runner.go:130] > runtime_type = "oci"
	I0408 22:56:32.903854   21772 command_runner.go:130] > runtime_root = "/run/runc"
	I0408 22:56:32.903860   21772 command_runner.go:130] > runtime_config_path = ""
	I0408 22:56:32.903881   21772 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0408 22:56:32.903890   21772 command_runner.go:130] > monitor_cgroup = "pod"
	I0408 22:56:32.903896   21772 command_runner.go:130] > monitor_exec_cgroup = ""
	I0408 22:56:32.903905   21772 command_runner.go:130] > monitor_env = [
	I0408 22:56:32.903914   21772 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0408 22:56:32.903922   21772 command_runner.go:130] > ]
	I0408 22:56:32.903929   21772 command_runner.go:130] > privileged_without_host_devices = false
	I0408 22:56:32.903943   21772 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0408 22:56:32.903954   21772 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0408 22:56:32.903974   21772 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0408 22:56:32.903992   21772 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I0408 22:56:32.904007   21772 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0408 22:56:32.904018   21772 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0408 22:56:32.904031   21772 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0408 22:56:32.904046   21772 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0408 22:56:32.904059   21772 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0408 22:56:32.904070   21772 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0408 22:56:32.904078   21772 command_runner.go:130] > # Example:
	I0408 22:56:32.904085   21772 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0408 22:56:32.904096   21772 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0408 22:56:32.904104   21772 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0408 22:56:32.904109   21772 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0408 22:56:32.904116   21772 command_runner.go:130] > # cpuset = 0
	I0408 22:56:32.904122   21772 command_runner.go:130] > # cpushares = "0-1"
	I0408 22:56:32.904131   21772 command_runner.go:130] > # Where:
	I0408 22:56:32.904138   21772 command_runner.go:130] > # The workload name is workload-type.
	I0408 22:56:32.904151   21772 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0408 22:56:32.904162   21772 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0408 22:56:32.904171   21772 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0408 22:56:32.904185   21772 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0408 22:56:32.904195   21772 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
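	(Illustrative sketch: a fuller workload definition, under the assumption that cpuset takes a Linux CPU list string and cpushares a numeric share value; the "throttled" name and the values are made up.)
	# [crio.runtime.workloads.throttled]
	# activation_annotation = "io.crio/throttled"
	# annotation_prefix = "io.crio.throttled"
	# [crio.runtime.workloads.throttled.resources]
	# cpushares = 512
	# cpuset = "0-1"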
	I0408 22:56:32.904202   21772 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0408 22:56:32.904216   21772 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0408 22:56:32.904226   21772 command_runner.go:130] > # Default value is set to true
	I0408 22:56:32.904232   21772 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0408 22:56:32.904244   21772 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0408 22:56:32.904253   21772 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0408 22:56:32.904260   21772 command_runner.go:130] > # Default value is set to 'false'
	I0408 22:56:32.904267   21772 command_runner.go:130] > # disable_hostport_mapping = false
	I0408 22:56:32.904275   21772 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0408 22:56:32.904280   21772 command_runner.go:130] > #
	I0408 22:56:32.904288   21772 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0408 22:56:32.904307   21772 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0408 22:56:32.904322   21772 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0408 22:56:32.904335   21772 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0408 22:56:32.904349   21772 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0408 22:56:32.904357   21772 command_runner.go:130] > [crio.image]
	I0408 22:56:32.904363   21772 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0408 22:56:32.904371   21772 command_runner.go:130] > # default_transport = "docker://"
	I0408 22:56:32.904382   21772 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0408 22:56:32.904394   21772 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0408 22:56:32.904404   21772 command_runner.go:130] > # global_auth_file = ""
	I0408 22:56:32.904411   21772 command_runner.go:130] > # The image used to instantiate infra containers.
	I0408 22:56:32.904421   21772 command_runner.go:130] > # This option supports live configuration reload.
	I0408 22:56:32.904431   21772 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.10"
	I0408 22:56:32.904441   21772 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0408 22:56:32.904449   21772 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0408 22:56:32.904454   21772 command_runner.go:130] > # This option supports live configuration reload.
	I0408 22:56:32.904459   21772 command_runner.go:130] > # pause_image_auth_file = ""
	I0408 22:56:32.904464   21772 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0408 22:56:32.904472   21772 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0408 22:56:32.904481   21772 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0408 22:56:32.904494   21772 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0408 22:56:32.904503   21772 command_runner.go:130] > # pause_command = "/pause"
	I0408 22:56:32.904511   21772 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0408 22:56:32.904551   21772 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0408 22:56:32.904556   21772 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0408 22:56:32.904564   21772 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0408 22:56:32.904569   21772 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0408 22:56:32.904578   21772 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0408 22:56:32.904584   21772 command_runner.go:130] > # pinned_images = [
	I0408 22:56:32.904592   21772 command_runner.go:130] > # ]
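	(Illustrative sketch: assumed patterns for the matching rules described above — an exact name, a trailing glob, and a keyword match with wildcards on both ends.)
	# pinned_images = [
	# 	"registry.k8s.io/pause:3.10",
	# 	"quay.io/crio/*",
	# 	"*pause*",
	# ]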
	I0408 22:56:32.904600   21772 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0408 22:56:32.904607   21772 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0408 22:56:32.904615   21772 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0408 22:56:32.904629   21772 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0408 22:56:32.904642   21772 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0408 22:56:32.904651   21772 command_runner.go:130] > # signature_policy = ""
	I0408 22:56:32.904660   21772 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0408 22:56:32.904672   21772 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0408 22:56:32.904681   21772 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0408 22:56:32.904694   21772 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0408 22:56:32.904702   21772 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0408 22:56:32.904707   21772 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0408 22:56:32.904714   21772 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0408 22:56:32.904720   21772 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0408 22:56:32.904723   21772 command_runner.go:130] > # changing them here.
	I0408 22:56:32.904726   21772 command_runner.go:130] > # insecure_registries = [
	I0408 22:56:32.904729   21772 command_runner.go:130] > # ]
	I0408 22:56:32.904735   21772 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0408 22:56:32.904739   21772 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0408 22:56:32.904743   21772 command_runner.go:130] > # image_volumes = "mkdir"
	I0408 22:56:32.904747   21772 command_runner.go:130] > # Temporary directory to use for storing big files
	I0408 22:56:32.904751   21772 command_runner.go:130] > # big_files_temporary_dir = ""
	I0408 22:56:32.904756   21772 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0408 22:56:32.904760   21772 command_runner.go:130] > # CNI plugins.
	I0408 22:56:32.904763   21772 command_runner.go:130] > [crio.network]
	I0408 22:56:32.904768   21772 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0408 22:56:32.904773   21772 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0408 22:56:32.904777   21772 command_runner.go:130] > # cni_default_network = ""
	I0408 22:56:32.904782   21772 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0408 22:56:32.904786   21772 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0408 22:56:32.904791   21772 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0408 22:56:32.904794   21772 command_runner.go:130] > # plugin_dirs = [
	I0408 22:56:32.904798   21772 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0408 22:56:32.904800   21772 command_runner.go:130] > # ]
	I0408 22:56:32.904805   21772 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0408 22:56:32.904809   21772 command_runner.go:130] > [crio.metrics]
	I0408 22:56:32.904818   21772 command_runner.go:130] > # Globally enable or disable metrics support.
	I0408 22:56:32.904821   21772 command_runner.go:130] > enable_metrics = true
	I0408 22:56:32.904825   21772 command_runner.go:130] > # Specify enabled metrics collectors.
	I0408 22:56:32.904829   21772 command_runner.go:130] > # Per default all metrics are enabled.
	I0408 22:56:32.904834   21772 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0408 22:56:32.904840   21772 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0408 22:56:32.904847   21772 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0408 22:56:32.904853   21772 command_runner.go:130] > # metrics_collectors = [
	I0408 22:56:32.904859   21772 command_runner.go:130] > # 	"operations",
	I0408 22:56:32.904866   21772 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0408 22:56:32.904871   21772 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0408 22:56:32.904875   21772 command_runner.go:130] > # 	"operations_errors",
	I0408 22:56:32.904879   21772 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0408 22:56:32.904882   21772 command_runner.go:130] > # 	"image_pulls_by_name",
	I0408 22:56:32.904888   21772 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0408 22:56:32.904892   21772 command_runner.go:130] > # 	"image_pulls_failures",
	I0408 22:56:32.904895   21772 command_runner.go:130] > # 	"image_pulls_successes",
	I0408 22:56:32.904899   21772 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0408 22:56:32.904903   21772 command_runner.go:130] > # 	"image_layer_reuse",
	I0408 22:56:32.904907   21772 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0408 22:56:32.904911   21772 command_runner.go:130] > # 	"containers_oom_total",
	I0408 22:56:32.904915   21772 command_runner.go:130] > # 	"containers_oom",
	I0408 22:56:32.904918   21772 command_runner.go:130] > # 	"processes_defunct",
	I0408 22:56:32.904922   21772 command_runner.go:130] > # 	"operations_total",
	I0408 22:56:32.904929   21772 command_runner.go:130] > # 	"operations_latency_seconds",
	I0408 22:56:32.904933   21772 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0408 22:56:32.904937   21772 command_runner.go:130] > # 	"operations_errors_total",
	I0408 22:56:32.904947   21772 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0408 22:56:32.904955   21772 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0408 22:56:32.904959   21772 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0408 22:56:32.904963   21772 command_runner.go:130] > # 	"image_pulls_success_total",
	I0408 22:56:32.904967   21772 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0408 22:56:32.904971   21772 command_runner.go:130] > # 	"containers_oom_count_total",
	I0408 22:56:32.904981   21772 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0408 22:56:32.904988   21772 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0408 22:56:32.904991   21772 command_runner.go:130] > # ]
	I0408 22:56:32.905000   21772 command_runner.go:130] > # The port on which the metrics server will listen.
	I0408 22:56:32.905006   21772 command_runner.go:130] > # metrics_port = 9090
	I0408 22:56:32.905011   21772 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0408 22:56:32.905014   21772 command_runner.go:130] > # metrics_socket = ""
	I0408 22:56:32.905019   21772 command_runner.go:130] > # The certificate for the secure metrics server.
	I0408 22:56:32.905024   21772 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0408 22:56:32.905033   21772 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0408 22:56:32.905037   21772 command_runner.go:130] > # certificate on any modification event.
	I0408 22:56:32.905043   21772 command_runner.go:130] > # metrics_cert = ""
	I0408 22:56:32.905048   21772 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0408 22:56:32.905052   21772 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0408 22:56:32.905058   21772 command_runner.go:130] > # metrics_key = ""
	I0408 22:56:32.905064   21772 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0408 22:56:32.905070   21772 command_runner.go:130] > [crio.tracing]
	I0408 22:56:32.905075   21772 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0408 22:56:32.905079   21772 command_runner.go:130] > # enable_tracing = false
	I0408 22:56:32.905087   21772 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I0408 22:56:32.905091   21772 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0408 22:56:32.905097   21772 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0408 22:56:32.905104   21772 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0408 22:56:32.905108   21772 command_runner.go:130] > # CRI-O NRI configuration.
	I0408 22:56:32.905113   21772 command_runner.go:130] > [crio.nri]
	I0408 22:56:32.905117   21772 command_runner.go:130] > # Globally enable or disable NRI.
	I0408 22:56:32.905125   21772 command_runner.go:130] > # enable_nri = false
	I0408 22:56:32.905129   21772 command_runner.go:130] > # NRI socket to listen on.
	I0408 22:56:32.905136   21772 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0408 22:56:32.905139   21772 command_runner.go:130] > # NRI plugin directory to use.
	I0408 22:56:32.905144   21772 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0408 22:56:32.905148   21772 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0408 22:56:32.905155   21772 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0408 22:56:32.905164   21772 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0408 22:56:32.905171   21772 command_runner.go:130] > # nri_disable_connections = false
	I0408 22:56:32.905175   21772 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0408 22:56:32.905182   21772 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0408 22:56:32.905186   21772 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0408 22:56:32.905193   21772 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0408 22:56:32.905199   21772 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0408 22:56:32.905204   21772 command_runner.go:130] > [crio.stats]
	I0408 22:56:32.905210   21772 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0408 22:56:32.905217   21772 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0408 22:56:32.905223   21772 command_runner.go:130] > # stats_collection_period = 0
	I0408 22:56:32.905256   21772 command_runner.go:130] ! time="2025-04-08 22:56:32.868436253Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0408 22:56:32.905274   21772 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0408 22:56:32.905342   21772 cni.go:84] Creating CNI manager for ""
	I0408 22:56:32.905354   21772 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0408 22:56:32.905364   21772 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0408 22:56:32.905388   21772 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.234 APIServerPort:8441 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-546336 NodeName:functional-546336 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.234"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.234 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0408 22:56:32.905493   21772 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.234
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-546336"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.234"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.234"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0408 22:56:32.905580   21772 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0408 22:56:32.914548   21772 command_runner.go:130] > kubeadm
	I0408 22:56:32.914564   21772 command_runner.go:130] > kubectl
	I0408 22:56:32.914568   21772 command_runner.go:130] > kubelet
	I0408 22:56:32.914646   21772 binaries.go:44] Found k8s binaries, skipping transfer
	I0408 22:56:32.914718   21772 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0408 22:56:32.923150   21772 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0408 22:56:32.938212   21772 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0408 22:56:32.953395   21772 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2297 bytes)
	I0408 22:56:32.968282   21772 ssh_runner.go:195] Run: grep 192.168.39.234	control-plane.minikube.internal$ /etc/hosts
	I0408 22:56:32.971857   21772 command_runner.go:130] > 192.168.39.234	control-plane.minikube.internal
	I0408 22:56:32.971923   21772 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 22:56:33.097315   21772 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0408 22:56:33.112048   21772 certs.go:68] Setting up /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/functional-546336 for IP: 192.168.39.234
	I0408 22:56:33.112066   21772 certs.go:194] generating shared ca certs ...
	I0408 22:56:33.112083   21772 certs.go:226] acquiring lock for ca certs: {Name:mk0d455aae85017ac942481bbc1202ccedea144f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 22:56:33.112251   21772 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20501-9125/.minikube/ca.key
	I0408 22:56:33.112294   21772 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20501-9125/.minikube/proxy-client-ca.key
	I0408 22:56:33.112308   21772 certs.go:256] generating profile certs ...
	I0408 22:56:33.112383   21772 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/functional-546336/client.key
	I0408 22:56:33.112451   21772 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/functional-546336/apiserver.key.848fae18
	I0408 22:56:33.112486   21772 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/functional-546336/proxy-client.key
	I0408 22:56:33.112495   21772 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20501-9125/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0408 22:56:33.112506   21772 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20501-9125/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0408 22:56:33.112517   21772 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20501-9125/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0408 22:56:33.112526   21772 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20501-9125/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0408 22:56:33.112540   21772 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/functional-546336/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0408 22:56:33.112552   21772 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/functional-546336/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0408 22:56:33.112561   21772 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/functional-546336/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0408 22:56:33.112572   21772 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/functional-546336/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0408 22:56:33.112624   21772 certs.go:484] found cert: /home/jenkins/minikube-integration/20501-9125/.minikube/certs/16314.pem (1338 bytes)
	W0408 22:56:33.112665   21772 certs.go:480] ignoring /home/jenkins/minikube-integration/20501-9125/.minikube/certs/16314_empty.pem, impossibly tiny 0 bytes
	I0408 22:56:33.112678   21772 certs.go:484] found cert: /home/jenkins/minikube-integration/20501-9125/.minikube/certs/ca-key.pem (1679 bytes)
	I0408 22:56:33.112704   21772 certs.go:484] found cert: /home/jenkins/minikube-integration/20501-9125/.minikube/certs/ca.pem (1082 bytes)
	I0408 22:56:33.112735   21772 certs.go:484] found cert: /home/jenkins/minikube-integration/20501-9125/.minikube/certs/cert.pem (1123 bytes)
	I0408 22:56:33.112774   21772 certs.go:484] found cert: /home/jenkins/minikube-integration/20501-9125/.minikube/certs/key.pem (1675 bytes)
	I0408 22:56:33.112819   21772 certs.go:484] found cert: /home/jenkins/minikube-integration/20501-9125/.minikube/files/etc/ssl/certs/163142.pem (1708 bytes)
	I0408 22:56:33.112860   21772 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20501-9125/.minikube/files/etc/ssl/certs/163142.pem -> /usr/share/ca-certificates/163142.pem
	I0408 22:56:33.112879   21772 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20501-9125/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0408 22:56:33.112897   21772 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20501-9125/.minikube/certs/16314.pem -> /usr/share/ca-certificates/16314.pem
	I0408 22:56:33.113475   21772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20501-9125/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0408 22:56:33.137877   21772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20501-9125/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0408 22:56:33.159070   21772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20501-9125/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0408 22:56:33.185298   21772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20501-9125/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0408 22:56:33.207770   21772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/functional-546336/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0408 22:56:33.228856   21772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/functional-546336/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0408 22:56:33.251027   21772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/functional-546336/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0408 22:56:33.272315   21772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/functional-546336/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0408 22:56:33.294625   21772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20501-9125/.minikube/files/etc/ssl/certs/163142.pem --> /usr/share/ca-certificates/163142.pem (1708 bytes)
	I0408 22:56:33.316217   21772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20501-9125/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0408 22:56:33.337786   21772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20501-9125/.minikube/certs/16314.pem --> /usr/share/ca-certificates/16314.pem (1338 bytes)
	I0408 22:56:33.358722   21772 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0408 22:56:33.373131   21772 ssh_runner.go:195] Run: openssl version
	I0408 22:56:33.378702   21772 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0408 22:56:33.378755   21772 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/163142.pem && ln -fs /usr/share/ca-certificates/163142.pem /etc/ssl/certs/163142.pem"
	I0408 22:56:33.388262   21772 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/163142.pem
	I0408 22:56:33.392059   21772 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Apr  8 22:53 /usr/share/ca-certificates/163142.pem
	I0408 22:56:33.392090   21772 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  8 22:53 /usr/share/ca-certificates/163142.pem
	I0408 22:56:33.392135   21772 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/163142.pem
	I0408 22:56:33.397236   21772 command_runner.go:130] > 3ec20f2e
	I0408 22:56:33.397295   21772 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/163142.pem /etc/ssl/certs/3ec20f2e.0"
	I0408 22:56:33.405382   21772 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0408 22:56:33.414578   21772 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0408 22:56:33.418346   21772 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Apr  8 22:46 /usr/share/ca-certificates/minikubeCA.pem
	I0408 22:56:33.418448   21772 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  8 22:46 /usr/share/ca-certificates/minikubeCA.pem
	I0408 22:56:33.418490   21772 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0408 22:56:33.423400   21772 command_runner.go:130] > b5213941
	I0408 22:56:33.423452   21772 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0408 22:56:33.431557   21772 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16314.pem && ln -fs /usr/share/ca-certificates/16314.pem /etc/ssl/certs/16314.pem"
	I0408 22:56:33.442046   21772 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16314.pem
	I0408 22:56:33.446095   21772 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Apr  8 22:53 /usr/share/ca-certificates/16314.pem
	I0408 22:56:33.446156   21772 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  8 22:53 /usr/share/ca-certificates/16314.pem
	I0408 22:56:33.446198   21772 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16314.pem
	I0408 22:56:33.451257   21772 command_runner.go:130] > 51391683
	I0408 22:56:33.451490   21772 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16314.pem /etc/ssl/certs/51391683.0"
	I0408 22:56:33.460149   21772 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0408 22:56:33.463927   21772 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0408 22:56:33.463942   21772 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0408 22:56:33.463948   21772 command_runner.go:130] > Device: 253,1	Inode: 7338542     Links: 1
	I0408 22:56:33.463973   21772 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0408 22:56:33.463986   21772 command_runner.go:130] > Access: 2025-04-08 22:54:04.578144403 +0000
	I0408 22:56:33.463994   21772 command_runner.go:130] > Modify: 2025-04-08 22:54:04.578144403 +0000
	I0408 22:56:33.464003   21772 command_runner.go:130] > Change: 2025-04-08 22:54:04.578144403 +0000
	I0408 22:56:33.464008   21772 command_runner.go:130] >  Birth: 2025-04-08 22:54:04.578144403 +0000
	I0408 22:56:33.464063   21772 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0408 22:56:33.469050   21772 command_runner.go:130] > Certificate will not expire
	I0408 22:56:33.469263   21772 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0408 22:56:33.474068   21772 command_runner.go:130] > Certificate will not expire
	I0408 22:56:33.474186   21772 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0408 22:56:33.478955   21772 command_runner.go:130] > Certificate will not expire
	I0408 22:56:33.479120   21772 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0408 22:56:33.484075   21772 command_runner.go:130] > Certificate will not expire
	I0408 22:56:33.484130   21772 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0408 22:56:33.488910   21772 command_runner.go:130] > Certificate will not expire
	I0408 22:56:33.488951   21772 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0408 22:56:33.493716   21772 command_runner.go:130] > Certificate will not expire
	I0408 22:56:33.493900   21772 kubeadm.go:392] StartCluster: {Name:functional-546336 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:functional-5463
36 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.234 Port:8441 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpti
ons:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 22:56:33.493993   21772 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0408 22:56:33.494051   21772 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0408 22:56:33.531075   21772 command_runner.go:130] > f5383886ea44f313131ed72cfa49949bd8b5d2f4873b4e32d81e980ce2940fd0
	I0408 22:56:33.531123   21772 command_runner.go:130] > c7dc125c272a5b82818d376a354500ce1464ae56dbc755c0a0779f8c284bfc5c
	I0408 22:56:33.531134   21772 command_runner.go:130] > 0b6b87627794e032e13a615d5ea5b991ffef3843bb9ae6e9f153041eb733d782
	I0408 22:56:33.531145   21772 command_runner.go:130] > d19ba2e208f70c1259cbd17e3566fbc44b8b4d173fc3026591fa8cea6fad11ec
	I0408 22:56:33.531154   21772 command_runner.go:130] > a02e7488bb5d2cf6fe89c9af4932fa61408d59b64f5415e68dd23aef1e5f092c
	I0408 22:56:33.531170   21772 command_runner.go:130] > e50303177ff5685821e81bebd1db71a63709a33fc7ff89cb111d923979605c70
	I0408 22:56:33.531180   21772 command_runner.go:130] > d31c5cb795e76e9631c155d0b7e96672025f714ef26b9972bd87e759a40f7e4d
	I0408 22:56:33.531194   21772 command_runner.go:130] > 090c0b802b3a3a27f9446836815daacc48a4cfa1ed0b54043325e0eada99d664
	I0408 22:56:33.531207   21772 command_runner.go:130] > f5685f897beb5418ec57fb5f80ab70b0ffe4b406ecf635a6249b528e65cabfc4
	I0408 22:56:33.531221   21772 command_runner.go:130] > 31aa14fb57438b0d736b005aed16b4fb5438a2d6ce4af59f042b06d0271bcaa5
	I0408 22:56:33.531245   21772 cri.go:89] found id: "f5383886ea44f313131ed72cfa49949bd8b5d2f4873b4e32d81e980ce2940fd0"
	I0408 22:56:33.531257   21772 cri.go:89] found id: "c7dc125c272a5b82818d376a354500ce1464ae56dbc755c0a0779f8c284bfc5c"
	I0408 22:56:33.531266   21772 cri.go:89] found id: "0b6b87627794e032e13a615d5ea5b991ffef3843bb9ae6e9f153041eb733d782"
	I0408 22:56:33.531275   21772 cri.go:89] found id: "d19ba2e208f70c1259cbd17e3566fbc44b8b4d173fc3026591fa8cea6fad11ec"
	I0408 22:56:33.531284   21772 cri.go:89] found id: "a02e7488bb5d2cf6fe89c9af4932fa61408d59b64f5415e68dd23aef1e5f092c"
	I0408 22:56:33.531294   21772 cri.go:89] found id: "e50303177ff5685821e81bebd1db71a63709a33fc7ff89cb111d923979605c70"
	I0408 22:56:33.531302   21772 cri.go:89] found id: "d31c5cb795e76e9631c155d0b7e96672025f714ef26b9972bd87e759a40f7e4d"
	I0408 22:56:33.531308   21772 cri.go:89] found id: "090c0b802b3a3a27f9446836815daacc48a4cfa1ed0b54043325e0eada99d664"
	I0408 22:56:33.531312   21772 cri.go:89] found id: "f5685f897beb5418ec57fb5f80ab70b0ffe4b406ecf635a6249b528e65cabfc4"
	I0408 22:56:33.531318   21772 cri.go:89] found id: "31aa14fb57438b0d736b005aed16b4fb5438a2d6ce4af59f042b06d0271bcaa5"
	I0408 22:56:33.531323   21772 cri.go:89] found id: ""
	I0408 22:56:33.531374   21772 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
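The repeated `openssl x509 -noout -checkend 86400` calls in the log above succeed (logging "Certificate will not expire") only when the certificate stays valid for at least the next 86400 seconds, i.e. 24 hours. A minimal Go sketch of the same check, shelling out to openssl exactly as the log does; the helper name certStillValid is ours for illustration, not minikube's:

package main

import (
	"fmt"
	"os/exec"
)

// certStillValid reports whether the certificate at path remains valid for at
// least the next `seconds` seconds. openssl exits 0 when the cert will NOT
// expire within the window and non-zero when it will.
func certStillValid(path string, seconds int) (bool, error) {
	cmd := exec.Command("openssl", "x509", "-noout",
		"-in", path, "-checkend", fmt.Sprint(seconds))
	if err := cmd.Run(); err != nil {
		if _, ok := err.(*exec.ExitError); ok {
			return false, nil // non-zero exit: expires within the window
		}
		return false, err // openssl missing, unreadable file, etc.
	}
	return true, nil // "Certificate will not expire"
}

func main() {
	ok, err := certStillValid("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 86400)
	if err != nil {
		fmt.Println("check failed:", err)
		return
	}
	fmt.Println("valid for the next 24h:", ok)
}

minikube itself issues these commands over SSH inside the VM (the ssh_runner.go lines above), so this sketch only mirrors the openssl semantics, not the transport.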
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-546336 -n functional-546336
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-546336 -n functional-546336: exit status 2 (216.181214ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "functional-546336" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmdDirectly (336.41s)
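In the post-mortem above the harness runs `out/minikube-linux-amd64 status --format={{.APIServer}}` and, even on exit status 2, still uses the printed state ("Stopped", annotated "may be ok"). A rough Go sketch of that pattern, capturing both the templated output and the exit code; the helper apiServerState is illustrative and not part of the test suite:

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

// apiServerState runs `minikube status --format={{.APIServer}}` for a profile
// and returns the printed state together with the command's exit code. As the
// "(may be ok)" note in the log suggests, a non-zero exit does not necessarily
// mean the state string is unusable.
func apiServerState(minikubeBin, profile string) (string, int, error) {
	cmd := exec.Command(minikubeBin, "status", "--format={{.APIServer}}", "-p", profile)
	out, err := cmd.Output()
	state := strings.TrimSpace(string(out))
	if err != nil {
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			return state, exitErr.ExitCode(), nil // state text may still be valid
		}
		return state, -1, err
	}
	return state, 0, nil
}

func main() {
	state, code, err := apiServerState("out/minikube-linux-amd64", "functional-546336")
	fmt.Printf("state=%q exit=%d err=%v\n", state, code, err)
}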

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (135.44s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:774: (dbg) Run:  out/minikube-linux-amd64 start -p functional-546336 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0408 23:32:57.936201   16314 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/addons-355098/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:774: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-546336 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: signal: killed (2m13.808297602s)

                                                
                                                
-- stdout --
	* [functional-546336] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20501
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20501-9125/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20501-9125/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "functional-546336" primary control-plane node in "functional-546336" cluster
	* Updating the running kvm2 "functional-546336" VM ...
	* Preparing Kubernetes v1.32.2 on CRI-O 1.29.1 ...
	  - apiserver.enable-admission-plugins=NamespaceAutoProvision
	* Configuring bridge CNI (Container Networking Interface) ...

                                                
                                                
-- /stdout --
functional_test.go:776: failed to restart minikube. args "out/minikube-linux-amd64 start -p functional-546336 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all": signal: killed
functional_test.go:778: restart took 2m13.808496404s for "functional-546336" cluster.
I0408 23:33:33.533402   16314 config.go:182] Loaded profile config "functional-546336": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
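The failure mode above, `signal: killed` after roughly 2m13s with no error from minikube itself, is the usual signature of the child process being killed when the caller's deadline expires rather than of a crash. A small Go illustration of the mechanism; the 5-second timeout and the `sleep` command are placeholders, not the harness's actual values:

package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// When the context's deadline passes, exec.CommandContext kills the child
	// with SIGKILL; Run then reports the error as "signal: killed", the same
	// string the harness prints for the aborted `minikube start`.
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	cmd := exec.CommandContext(ctx, "sleep", "60")
	start := time.Now()
	err := cmd.Run()
	fmt.Printf("finished after %s: %v\n", time.Since(start).Round(time.Second), err)
	// Typical output: finished after 5s: signal: killed
}

This is consistent with the truncated stdout above: the start sequence reaches "Configuring bridge CNI" and then simply stops, with no error line of its own.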
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-546336 -n functional-546336
helpers_test.go:244: <<< TestFunctional/serial/ExtraConfig FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/serial/ExtraConfig]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p functional-546336 logs -n 25
helpers_test.go:252: TestFunctional/serial/ExtraConfig logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| Command |                                   Args                                   |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| unpause | nospam-715453 --log_dir                                                  | nospam-715453     | jenkins | v1.35.0 | 08 Apr 25 22:53 UTC | 08 Apr 25 22:53 UTC |
	|         | /tmp/nospam-715453 unpause                                               |                   |         |         |                     |                     |
	| unpause | nospam-715453 --log_dir                                                  | nospam-715453     | jenkins | v1.35.0 | 08 Apr 25 22:53 UTC | 08 Apr 25 22:53 UTC |
	|         | /tmp/nospam-715453 unpause                                               |                   |         |         |                     |                     |
	| unpause | nospam-715453 --log_dir                                                  | nospam-715453     | jenkins | v1.35.0 | 08 Apr 25 22:53 UTC | 08 Apr 25 22:53 UTC |
	|         | /tmp/nospam-715453 unpause                                               |                   |         |         |                     |                     |
	| stop    | nospam-715453 --log_dir                                                  | nospam-715453     | jenkins | v1.35.0 | 08 Apr 25 22:53 UTC | 08 Apr 25 22:53 UTC |
	|         | /tmp/nospam-715453 stop                                                  |                   |         |         |                     |                     |
	| stop    | nospam-715453 --log_dir                                                  | nospam-715453     | jenkins | v1.35.0 | 08 Apr 25 22:53 UTC | 08 Apr 25 22:53 UTC |
	|         | /tmp/nospam-715453 stop                                                  |                   |         |         |                     |                     |
	| stop    | nospam-715453 --log_dir                                                  | nospam-715453     | jenkins | v1.35.0 | 08 Apr 25 22:53 UTC | 08 Apr 25 22:53 UTC |
	|         | /tmp/nospam-715453 stop                                                  |                   |         |         |                     |                     |
	| delete  | -p nospam-715453                                                         | nospam-715453     | jenkins | v1.35.0 | 08 Apr 25 22:53 UTC | 08 Apr 25 22:53 UTC |
	| start   | -p functional-546336                                                     | functional-546336 | jenkins | v1.35.0 | 08 Apr 25 22:53 UTC | 08 Apr 25 22:54 UTC |
	|         | --memory=4000                                                            |                   |         |         |                     |                     |
	|         | --apiserver-port=8441                                                    |                   |         |         |                     |                     |
	|         | --wait=all --driver=kvm2                                                 |                   |         |         |                     |                     |
	|         | --container-runtime=crio                                                 |                   |         |         |                     |                     |
	| start   | -p functional-546336                                                     | functional-546336 | jenkins | v1.35.0 | 08 Apr 25 22:54 UTC |                     |
	|         | --alsologtostderr -v=8                                                   |                   |         |         |                     |                     |
	| cache   | functional-546336 cache add                                              | functional-546336 | jenkins | v1.35.0 | 08 Apr 25 23:19 UTC | 08 Apr 25 23:20 UTC |
	|         | registry.k8s.io/pause:3.1                                                |                   |         |         |                     |                     |
	| cache   | functional-546336 cache add                                              | functional-546336 | jenkins | v1.35.0 | 08 Apr 25 23:20 UTC | 08 Apr 25 23:20 UTC |
	|         | registry.k8s.io/pause:3.3                                                |                   |         |         |                     |                     |
	| cache   | functional-546336 cache add                                              | functional-546336 | jenkins | v1.35.0 | 08 Apr 25 23:20 UTC | 08 Apr 25 23:20 UTC |
	|         | registry.k8s.io/pause:latest                                             |                   |         |         |                     |                     |
	| cache   | functional-546336 cache add                                              | functional-546336 | jenkins | v1.35.0 | 08 Apr 25 23:20 UTC | 08 Apr 25 23:20 UTC |
	|         | minikube-local-cache-test:functional-546336                              |                   |         |         |                     |                     |
	| cache   | functional-546336 cache delete                                           | functional-546336 | jenkins | v1.35.0 | 08 Apr 25 23:20 UTC | 08 Apr 25 23:20 UTC |
	|         | minikube-local-cache-test:functional-546336                              |                   |         |         |                     |                     |
	| cache   | delete                                                                   | minikube          | jenkins | v1.35.0 | 08 Apr 25 23:20 UTC | 08 Apr 25 23:20 UTC |
	|         | registry.k8s.io/pause:3.3                                                |                   |         |         |                     |                     |
	| cache   | list                                                                     | minikube          | jenkins | v1.35.0 | 08 Apr 25 23:20 UTC | 08 Apr 25 23:20 UTC |
	| ssh     | functional-546336 ssh sudo                                               | functional-546336 | jenkins | v1.35.0 | 08 Apr 25 23:20 UTC | 08 Apr 25 23:20 UTC |
	|         | crictl images                                                            |                   |         |         |                     |                     |
	| ssh     | functional-546336                                                        | functional-546336 | jenkins | v1.35.0 | 08 Apr 25 23:20 UTC | 08 Apr 25 23:20 UTC |
	|         | ssh sudo crictl rmi                                                      |                   |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                   |         |         |                     |                     |
	| ssh     | functional-546336 ssh                                                    | functional-546336 | jenkins | v1.35.0 | 08 Apr 25 23:20 UTC |                     |
	|         | sudo crictl inspecti                                                     |                   |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                   |         |         |                     |                     |
	| cache   | functional-546336 cache reload                                           | functional-546336 | jenkins | v1.35.0 | 08 Apr 25 23:20 UTC | 08 Apr 25 23:20 UTC |
	| ssh     | functional-546336 ssh                                                    | functional-546336 | jenkins | v1.35.0 | 08 Apr 25 23:20 UTC | 08 Apr 25 23:20 UTC |
	|         | sudo crictl inspecti                                                     |                   |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                   |         |         |                     |                     |
	| cache   | delete                                                                   | minikube          | jenkins | v1.35.0 | 08 Apr 25 23:20 UTC | 08 Apr 25 23:20 UTC |
	|         | registry.k8s.io/pause:3.1                                                |                   |         |         |                     |                     |
	| cache   | delete                                                                   | minikube          | jenkins | v1.35.0 | 08 Apr 25 23:20 UTC | 08 Apr 25 23:20 UTC |
	|         | registry.k8s.io/pause:latest                                             |                   |         |         |                     |                     |
	| kubectl | functional-546336 kubectl --                                             | functional-546336 | jenkins | v1.35.0 | 08 Apr 25 23:20 UTC |                     |
	|         | --context functional-546336                                              |                   |         |         |                     |                     |
	|         | get pods                                                                 |                   |         |         |                     |                     |
	| start   | -p functional-546336                                                     | functional-546336 | jenkins | v1.35.0 | 08 Apr 25 23:31 UTC |                     |
	|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                   |         |         |                     |                     |
	|         | --wait=all                                                               |                   |         |         |                     |                     |
	|---------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/08 23:31:19
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0408 23:31:19.766081   30464 out.go:345] Setting OutFile to fd 1 ...
	I0408 23:31:19.766201   30464 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0408 23:31:19.766204   30464 out.go:358] Setting ErrFile to fd 2...
	I0408 23:31:19.766207   30464 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0408 23:31:19.766405   30464 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20501-9125/.minikube/bin
	I0408 23:31:19.766918   30464 out.go:352] Setting JSON to false
	I0408 23:31:19.767778   30464 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4425,"bootTime":1744150655,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0408 23:31:19.767895   30464 start.go:139] virtualization: kvm guest
	I0408 23:31:19.769983   30464 out.go:177] * [functional-546336] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0408 23:31:19.771193   30464 out.go:177]   - MINIKUBE_LOCATION=20501
	I0408 23:31:19.771220   30464 notify.go:220] Checking for updates...
	I0408 23:31:19.773275   30464 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0408 23:31:19.774289   30464 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20501-9125/kubeconfig
	I0408 23:31:19.775184   30464 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20501-9125/.minikube
	I0408 23:31:19.776145   30464 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0408 23:31:19.777178   30464 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0408 23:31:19.778482   30464 config.go:182] Loaded profile config "functional-546336": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0408 23:31:19.778572   30464 driver.go:404] Setting default libvirt URI to qemu:///system
	I0408 23:31:19.778998   30464 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 23:31:19.779048   30464 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 23:31:19.794516   30464 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40081
	I0408 23:31:19.794948   30464 main.go:141] libmachine: () Calling .GetVersion
	I0408 23:31:19.795405   30464 main.go:141] libmachine: Using API Version  1
	I0408 23:31:19.795443   30464 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 23:31:19.795822   30464 main.go:141] libmachine: () Calling .GetMachineName
	I0408 23:31:19.796031   30464 main.go:141] libmachine: (functional-546336) Calling .DriverName
	I0408 23:31:19.827172   30464 out.go:177] * Using the kvm2 driver based on existing profile
	I0408 23:31:19.828121   30464 start.go:297] selected driver: kvm2
	I0408 23:31:19.828128   30464 start.go:901] validating driver "kvm2" against &{Name:functional-546336 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterNa
me:functional-546336 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.234 Port:8441 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSiz
e:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 23:31:19.828217   30464 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0408 23:31:19.828535   30464 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 23:31:19.828594   30464 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20501-9125/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0408 23:31:19.842735   30464 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0408 23:31:19.843394   30464 start_flags.go:975] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0408 23:31:19.843427   30464 cni.go:84] Creating CNI manager for ""
	I0408 23:31:19.843473   30464 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0408 23:31:19.843525   30464 start.go:340] cluster config:
	{Name:functional-546336 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:functional-546336 Namespace:default APIServerHAVIP: APIServerName:minikubeC
A APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.234 Port:8441 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSi
ze:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 23:31:19.843626   30464 iso.go:125] acquiring lock: {Name:mk618477bad490b102618c53c9c8c6b34f33ce81 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 23:31:19.845420   30464 out.go:177] * Starting "functional-546336" primary control-plane node in "functional-546336" cluster
	I0408 23:31:19.846306   30464 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0408 23:31:19.846338   30464 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20501-9125/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4
	I0408 23:31:19.846342   30464 cache.go:56] Caching tarball of preloaded images
	I0408 23:31:19.846427   30464 preload.go:172] Found /home/jenkins/minikube-integration/20501-9125/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0408 23:31:19.846433   30464 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on crio
	I0408 23:31:19.846511   30464 profile.go:143] Saving config to /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/functional-546336/config.json ...
	I0408 23:31:19.846676   30464 start.go:360] acquireMachinesLock for functional-546336: {Name:mke7be7b51cfddf557a39ecf6493fff6a1168ec9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 23:31:19.846705   30464 start.go:364] duration metric: took 21.051µs to acquireMachinesLock for "functional-546336"
	I0408 23:31:19.846714   30464 start.go:96] Skipping create...Using existing machine configuration
	I0408 23:31:19.846719   30464 fix.go:54] fixHost starting: 
	I0408 23:31:19.846970   30464 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 23:31:19.847002   30464 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 23:31:19.860385   30464 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43429
	I0408 23:31:19.860731   30464 main.go:141] libmachine: () Calling .GetVersion
	I0408 23:31:19.861108   30464 main.go:141] libmachine: Using API Version  1
	I0408 23:31:19.861123   30464 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 23:31:19.861427   30464 main.go:141] libmachine: () Calling .GetMachineName
	I0408 23:31:19.861571   30464 main.go:141] libmachine: (functional-546336) Calling .DriverName
	I0408 23:31:19.861708   30464 main.go:141] libmachine: (functional-546336) Calling .GetState
	I0408 23:31:19.863086   30464 fix.go:112] recreateIfNeeded on functional-546336: state=Running err=<nil>
	W0408 23:31:19.863098   30464 fix.go:138] unexpected machine state, will restart: <nil>
	I0408 23:31:19.864411   30464 out.go:177] * Updating the running kvm2 "functional-546336" VM ...
	I0408 23:31:19.865293   30464 machine.go:93] provisionDockerMachine start ...
	I0408 23:31:19.865304   30464 main.go:141] libmachine: (functional-546336) Calling .DriverName
	I0408 23:31:19.865488   30464 main.go:141] libmachine: (functional-546336) Calling .GetSSHHostname
	I0408 23:31:19.867664   30464 main.go:141] libmachine: (functional-546336) DBG | domain functional-546336 has defined MAC address 52:54:00:8a:a0:b8 in network mk-functional-546336
	I0408 23:31:19.868010   30464 main.go:141] libmachine: (functional-546336) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:a0:b8", ip: ""} in network mk-functional-546336: {Iface:virbr1 ExpiryTime:2025-04-09 00:23:48 +0000 UTC Type:0 Mac:52:54:00:8a:a0:b8 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:functional-546336 Clientid:01:52:54:00:8a:a0:b8}
	I0408 23:31:19.868028   30464 main.go:141] libmachine: (functional-546336) DBG | domain functional-546336 has defined IP address 192.168.39.234 and MAC address 52:54:00:8a:a0:b8 in network mk-functional-546336
	I0408 23:31:19.868134   30464 main.go:141] libmachine: (functional-546336) Calling .GetSSHPort
	I0408 23:31:19.868291   30464 main.go:141] libmachine: (functional-546336) Calling .GetSSHKeyPath
	I0408 23:31:19.868392   30464 main.go:141] libmachine: (functional-546336) Calling .GetSSHKeyPath
	I0408 23:31:19.868549   30464 main.go:141] libmachine: (functional-546336) Calling .GetSSHUsername
	I0408 23:31:19.868736   30464 main.go:141] libmachine: Using SSH client type: native
	I0408 23:31:19.868935   30464 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.234 22 <nil> <nil>}
	I0408 23:31:19.868949   30464 main.go:141] libmachine: About to run SSH command:
	hostname
	I0408 23:31:19.980120   30464 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-546336
	
	I0408 23:31:19.980135   30464 main.go:141] libmachine: (functional-546336) Calling .GetMachineName
	I0408 23:31:19.980343   30464 buildroot.go:166] provisioning hostname "functional-546336"
	I0408 23:31:19.980359   30464 main.go:141] libmachine: (functional-546336) Calling .GetMachineName
	I0408 23:31:19.980519   30464 main.go:141] libmachine: (functional-546336) Calling .GetSSHHostname
	I0408 23:31:19.983065   30464 main.go:141] libmachine: (functional-546336) DBG | domain functional-546336 has defined MAC address 52:54:00:8a:a0:b8 in network mk-functional-546336
	I0408 23:31:19.983336   30464 main.go:141] libmachine: (functional-546336) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:a0:b8", ip: ""} in network mk-functional-546336: {Iface:virbr1 ExpiryTime:2025-04-09 00:23:48 +0000 UTC Type:0 Mac:52:54:00:8a:a0:b8 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:functional-546336 Clientid:01:52:54:00:8a:a0:b8}
	I0408 23:31:19.983361   30464 main.go:141] libmachine: (functional-546336) DBG | domain functional-546336 has defined IP address 192.168.39.234 and MAC address 52:54:00:8a:a0:b8 in network mk-functional-546336
	I0408 23:31:19.983573   30464 main.go:141] libmachine: (functional-546336) Calling .GetSSHPort
	I0408 23:31:19.983751   30464 main.go:141] libmachine: (functional-546336) Calling .GetSSHKeyPath
	I0408 23:31:19.983902   30464 main.go:141] libmachine: (functional-546336) Calling .GetSSHKeyPath
	I0408 23:31:19.984051   30464 main.go:141] libmachine: (functional-546336) Calling .GetSSHUsername
	I0408 23:31:19.984177   30464 main.go:141] libmachine: Using SSH client type: native
	I0408 23:31:19.984364   30464 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.234 22 <nil> <nil>}
	I0408 23:31:19.984370   30464 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-546336 && echo "functional-546336" | sudo tee /etc/hostname
	I0408 23:31:20.108031   30464 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-546336
	
	I0408 23:31:20.108049   30464 main.go:141] libmachine: (functional-546336) Calling .GetSSHHostname
	I0408 23:31:20.110596   30464 main.go:141] libmachine: (functional-546336) DBG | domain functional-546336 has defined MAC address 52:54:00:8a:a0:b8 in network mk-functional-546336
	I0408 23:31:20.110884   30464 main.go:141] libmachine: (functional-546336) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:a0:b8", ip: ""} in network mk-functional-546336: {Iface:virbr1 ExpiryTime:2025-04-09 00:23:48 +0000 UTC Type:0 Mac:52:54:00:8a:a0:b8 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:functional-546336 Clientid:01:52:54:00:8a:a0:b8}
	I0408 23:31:20.110899   30464 main.go:141] libmachine: (functional-546336) DBG | domain functional-546336 has defined IP address 192.168.39.234 and MAC address 52:54:00:8a:a0:b8 in network mk-functional-546336
	I0408 23:31:20.111050   30464 main.go:141] libmachine: (functional-546336) Calling .GetSSHPort
	I0408 23:31:20.111205   30464 main.go:141] libmachine: (functional-546336) Calling .GetSSHKeyPath
	I0408 23:31:20.111372   30464 main.go:141] libmachine: (functional-546336) Calling .GetSSHKeyPath
	I0408 23:31:20.111496   30464 main.go:141] libmachine: (functional-546336) Calling .GetSSHUsername
	I0408 23:31:20.111632   30464 main.go:141] libmachine: Using SSH client type: native
	I0408 23:31:20.111825   30464 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.234 22 <nil> <nil>}
	I0408 23:31:20.111836   30464 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-546336' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-546336/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-546336' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0408 23:31:20.224196   30464 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0408 23:31:20.224227   30464 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20501-9125/.minikube CaCertPath:/home/jenkins/minikube-integration/20501-9125/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20501-9125/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20501-9125/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20501-9125/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20501-9125/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20501-9125/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20501-9125/.minikube}
	I0408 23:31:20.224262   30464 buildroot.go:174] setting up certificates
	I0408 23:31:20.224269   30464 provision.go:84] configureAuth start
	I0408 23:31:20.224276   30464 main.go:141] libmachine: (functional-546336) Calling .GetMachineName
	I0408 23:31:20.224499   30464 main.go:141] libmachine: (functional-546336) Calling .GetIP
	I0408 23:31:20.226863   30464 main.go:141] libmachine: (functional-546336) DBG | domain functional-546336 has defined MAC address 52:54:00:8a:a0:b8 in network mk-functional-546336
	I0408 23:31:20.227174   30464 main.go:141] libmachine: (functional-546336) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:a0:b8", ip: ""} in network mk-functional-546336: {Iface:virbr1 ExpiryTime:2025-04-09 00:23:48 +0000 UTC Type:0 Mac:52:54:00:8a:a0:b8 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:functional-546336 Clientid:01:52:54:00:8a:a0:b8}
	I0408 23:31:20.227187   30464 main.go:141] libmachine: (functional-546336) DBG | domain functional-546336 has defined IP address 192.168.39.234 and MAC address 52:54:00:8a:a0:b8 in network mk-functional-546336
	I0408 23:31:20.227309   30464 main.go:141] libmachine: (functional-546336) Calling .GetSSHHostname
	I0408 23:31:20.229306   30464 main.go:141] libmachine: (functional-546336) DBG | domain functional-546336 has defined MAC address 52:54:00:8a:a0:b8 in network mk-functional-546336
	I0408 23:31:20.229653   30464 main.go:141] libmachine: (functional-546336) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:a0:b8", ip: ""} in network mk-functional-546336: {Iface:virbr1 ExpiryTime:2025-04-09 00:23:48 +0000 UTC Type:0 Mac:52:54:00:8a:a0:b8 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:functional-546336 Clientid:01:52:54:00:8a:a0:b8}
	I0408 23:31:20.229678   30464 main.go:141] libmachine: (functional-546336) DBG | domain functional-546336 has defined IP address 192.168.39.234 and MAC address 52:54:00:8a:a0:b8 in network mk-functional-546336
	I0408 23:31:20.229778   30464 provision.go:143] copyHostCerts
	I0408 23:31:20.229819   30464 exec_runner.go:144] found /home/jenkins/minikube-integration/20501-9125/.minikube/ca.pem, removing ...
	I0408 23:31:20.229843   30464 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20501-9125/.minikube/ca.pem
	I0408 23:31:20.229903   30464 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20501-9125/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20501-9125/.minikube/ca.pem (1082 bytes)
	I0408 23:31:20.230027   30464 exec_runner.go:144] found /home/jenkins/minikube-integration/20501-9125/.minikube/cert.pem, removing ...
	I0408 23:31:20.230031   30464 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20501-9125/.minikube/cert.pem
	I0408 23:31:20.230054   30464 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20501-9125/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20501-9125/.minikube/cert.pem (1123 bytes)
	I0408 23:31:20.230112   30464 exec_runner.go:144] found /home/jenkins/minikube-integration/20501-9125/.minikube/key.pem, removing ...
	I0408 23:31:20.230115   30464 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20501-9125/.minikube/key.pem
	I0408 23:31:20.230133   30464 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20501-9125/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20501-9125/.minikube/key.pem (1675 bytes)
	I0408 23:31:20.230183   30464 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20501-9125/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20501-9125/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20501-9125/.minikube/certs/ca-key.pem org=jenkins.functional-546336 san=[127.0.0.1 192.168.39.234 functional-546336 localhost minikube]
	I0408 23:31:20.446323   30464 provision.go:177] copyRemoteCerts
	I0408 23:31:20.446370   30464 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0408 23:31:20.446391   30464 main.go:141] libmachine: (functional-546336) Calling .GetSSHHostname
	I0408 23:31:20.449021   30464 main.go:141] libmachine: (functional-546336) DBG | domain functional-546336 has defined MAC address 52:54:00:8a:a0:b8 in network mk-functional-546336
	I0408 23:31:20.449341   30464 main.go:141] libmachine: (functional-546336) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:a0:b8", ip: ""} in network mk-functional-546336: {Iface:virbr1 ExpiryTime:2025-04-09 00:23:48 +0000 UTC Type:0 Mac:52:54:00:8a:a0:b8 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:functional-546336 Clientid:01:52:54:00:8a:a0:b8}
	I0408 23:31:20.449353   30464 main.go:141] libmachine: (functional-546336) DBG | domain functional-546336 has defined IP address 192.168.39.234 and MAC address 52:54:00:8a:a0:b8 in network mk-functional-546336
	I0408 23:31:20.449499   30464 main.go:141] libmachine: (functional-546336) Calling .GetSSHPort
	I0408 23:31:20.449689   30464 main.go:141] libmachine: (functional-546336) Calling .GetSSHKeyPath
	I0408 23:31:20.449829   30464 main.go:141] libmachine: (functional-546336) Calling .GetSSHUsername
	I0408 23:31:20.449967   30464 sshutil.go:53] new ssh client: &{IP:192.168.39.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20501-9125/.minikube/machines/functional-546336/id_rsa Username:docker}
	I0408 23:31:20.533282   30464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20501-9125/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0408 23:31:20.555326   30464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20501-9125/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0408 23:31:20.576076   30464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20501-9125/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0408 23:31:20.597613   30464 provision.go:87] duration metric: took 373.332672ms to configureAuth
	I0408 23:31:20.597630   30464 buildroot.go:189] setting minikube options for container-runtime
	I0408 23:31:20.597787   30464 config.go:182] Loaded profile config "functional-546336": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0408 23:31:20.597858   30464 main.go:141] libmachine: (functional-546336) Calling .GetSSHHostname
	I0408 23:31:20.600291   30464 main.go:141] libmachine: (functional-546336) DBG | domain functional-546336 has defined MAC address 52:54:00:8a:a0:b8 in network mk-functional-546336
	I0408 23:31:20.600622   30464 main.go:141] libmachine: (functional-546336) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:a0:b8", ip: ""} in network mk-functional-546336: {Iface:virbr1 ExpiryTime:2025-04-09 00:23:48 +0000 UTC Type:0 Mac:52:54:00:8a:a0:b8 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:functional-546336 Clientid:01:52:54:00:8a:a0:b8}
	I0408 23:31:20.600644   30464 main.go:141] libmachine: (functional-546336) DBG | domain functional-546336 has defined IP address 192.168.39.234 and MAC address 52:54:00:8a:a0:b8 in network mk-functional-546336
	I0408 23:31:20.600790   30464 main.go:141] libmachine: (functional-546336) Calling .GetSSHPort
	I0408 23:31:20.600990   30464 main.go:141] libmachine: (functional-546336) Calling .GetSSHKeyPath
	I0408 23:31:20.601105   30464 main.go:141] libmachine: (functional-546336) Calling .GetSSHKeyPath
	I0408 23:31:20.601247   30464 main.go:141] libmachine: (functional-546336) Calling .GetSSHUsername
	I0408 23:31:20.601357   30464 main.go:141] libmachine: Using SSH client type: native
	I0408 23:31:20.601546   30464 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.234 22 <nil> <nil>}
	I0408 23:31:20.601562   30464 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0408 23:31:20.915394   30464 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0408 23:31:20.915422   30464 machine.go:96] duration metric: took 1.050109405s to provisionDockerMachine
	I0408 23:31:20.915434   30464 start.go:293] postStartSetup for "functional-546336" (driver="kvm2")
	I0408 23:31:20.915444   30464 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0408 23:31:20.915472   30464 main.go:141] libmachine: (functional-546336) Calling .DriverName
	I0408 23:31:20.915764   30464 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0408 23:31:20.915791   30464 main.go:141] libmachine: (functional-546336) Calling .GetSSHHostname
	I0408 23:31:20.918333   30464 main.go:141] libmachine: (functional-546336) DBG | domain functional-546336 has defined MAC address 52:54:00:8a:a0:b8 in network mk-functional-546336
	I0408 23:31:20.918629   30464 main.go:141] libmachine: (functional-546336) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:a0:b8", ip: ""} in network mk-functional-546336: {Iface:virbr1 ExpiryTime:2025-04-09 00:23:48 +0000 UTC Type:0 Mac:52:54:00:8a:a0:b8 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:functional-546336 Clientid:01:52:54:00:8a:a0:b8}
	I0408 23:31:20.918644   30464 main.go:141] libmachine: (functional-546336) DBG | domain functional-546336 has defined IP address 192.168.39.234 and MAC address 52:54:00:8a:a0:b8 in network mk-functional-546336
	I0408 23:31:20.918766   30464 main.go:141] libmachine: (functional-546336) Calling .GetSSHPort
	I0408 23:31:20.918969   30464 main.go:141] libmachine: (functional-546336) Calling .GetSSHKeyPath
	I0408 23:31:20.919112   30464 main.go:141] libmachine: (functional-546336) Calling .GetSSHUsername
	I0408 23:31:20.919233   30464 sshutil.go:53] new ssh client: &{IP:192.168.39.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20501-9125/.minikube/machines/functional-546336/id_rsa Username:docker}
	I0408 23:31:21.005181   30464 ssh_runner.go:195] Run: cat /etc/os-release
	I0408 23:31:21.008967   30464 info.go:137] Remote host: Buildroot 2023.02.9
	I0408 23:31:21.008984   30464 filesync.go:126] Scanning /home/jenkins/minikube-integration/20501-9125/.minikube/addons for local assets ...
	I0408 23:31:21.009035   30464 filesync.go:126] Scanning /home/jenkins/minikube-integration/20501-9125/.minikube/files for local assets ...
	I0408 23:31:21.009101   30464 filesync.go:149] local asset: /home/jenkins/minikube-integration/20501-9125/.minikube/files/etc/ssl/certs/163142.pem -> 163142.pem in /etc/ssl/certs
	I0408 23:31:21.009180   30464 filesync.go:149] local asset: /home/jenkins/minikube-integration/20501-9125/.minikube/files/etc/test/nested/copy/16314/hosts -> hosts in /etc/test/nested/copy/16314
	I0408 23:31:21.009211   30464 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/16314
	I0408 23:31:21.017892   30464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20501-9125/.minikube/files/etc/ssl/certs/163142.pem --> /etc/ssl/certs/163142.pem (1708 bytes)
	I0408 23:31:21.039700   30464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20501-9125/.minikube/files/etc/test/nested/copy/16314/hosts --> /etc/test/nested/copy/16314/hosts (40 bytes)
	I0408 23:31:21.060623   30464 start.go:296] duration metric: took 145.162487ms for postStartSetup
	I0408 23:31:21.060659   30464 fix.go:56] duration metric: took 1.21393779s for fixHost
	I0408 23:31:21.060681   30464 main.go:141] libmachine: (functional-546336) Calling .GetSSHHostname
	I0408 23:31:21.063375   30464 main.go:141] libmachine: (functional-546336) DBG | domain functional-546336 has defined MAC address 52:54:00:8a:a0:b8 in network mk-functional-546336
	I0408 23:31:21.063682   30464 main.go:141] libmachine: (functional-546336) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:a0:b8", ip: ""} in network mk-functional-546336: {Iface:virbr1 ExpiryTime:2025-04-09 00:23:48 +0000 UTC Type:0 Mac:52:54:00:8a:a0:b8 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:functional-546336 Clientid:01:52:54:00:8a:a0:b8}
	I0408 23:31:21.063712   30464 main.go:141] libmachine: (functional-546336) DBG | domain functional-546336 has defined IP address 192.168.39.234 and MAC address 52:54:00:8a:a0:b8 in network mk-functional-546336
	I0408 23:31:21.063904   30464 main.go:141] libmachine: (functional-546336) Calling .GetSSHPort
	I0408 23:31:21.064075   30464 main.go:141] libmachine: (functional-546336) Calling .GetSSHKeyPath
	I0408 23:31:21.064313   30464 main.go:141] libmachine: (functional-546336) Calling .GetSSHKeyPath
	I0408 23:31:21.064458   30464 main.go:141] libmachine: (functional-546336) Calling .GetSSHUsername
	I0408 23:31:21.064640   30464 main.go:141] libmachine: Using SSH client type: native
	I0408 23:31:21.064831   30464 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.234 22 <nil> <nil>}
	I0408 23:31:21.064836   30464 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0408 23:31:21.176190   30464 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744155081.103990209
	
	I0408 23:31:21.176203   30464 fix.go:216] guest clock: 1744155081.103990209
	I0408 23:31:21.176210   30464 fix.go:229] Guest: 2025-04-08 23:31:21.103990209 +0000 UTC Remote: 2025-04-08 23:31:21.0606618 +0000 UTC m=+1.332309207 (delta=43.328409ms)
	I0408 23:31:21.176255   30464 fix.go:200] guest clock delta is within tolerance: 43.328409ms
	I0408 23:31:21.176261   30464 start.go:83] releasing machines lock for "functional-546336", held for 1.329550241s
	I0408 23:31:21.176289   30464 main.go:141] libmachine: (functional-546336) Calling .DriverName
	I0408 23:31:21.176550   30464 main.go:141] libmachine: (functional-546336) Calling .GetIP
	I0408 23:31:21.179032   30464 main.go:141] libmachine: (functional-546336) DBG | domain functional-546336 has defined MAC address 52:54:00:8a:a0:b8 in network mk-functional-546336
	I0408 23:31:21.179272   30464 main.go:141] libmachine: (functional-546336) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:a0:b8", ip: ""} in network mk-functional-546336: {Iface:virbr1 ExpiryTime:2025-04-09 00:23:48 +0000 UTC Type:0 Mac:52:54:00:8a:a0:b8 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:functional-546336 Clientid:01:52:54:00:8a:a0:b8}
	I0408 23:31:21.179294   30464 main.go:141] libmachine: (functional-546336) DBG | domain functional-546336 has defined IP address 192.168.39.234 and MAC address 52:54:00:8a:a0:b8 in network mk-functional-546336
	I0408 23:31:21.179401   30464 main.go:141] libmachine: (functional-546336) Calling .DriverName
	I0408 23:31:21.179885   30464 main.go:141] libmachine: (functional-546336) Calling .DriverName
	I0408 23:31:21.180047   30464 main.go:141] libmachine: (functional-546336) Calling .DriverName
	I0408 23:31:21.180137   30464 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0408 23:31:21.180167   30464 main.go:141] libmachine: (functional-546336) Calling .GetSSHHostname
	I0408 23:31:21.180235   30464 ssh_runner.go:195] Run: cat /version.json
	I0408 23:31:21.180252   30464 main.go:141] libmachine: (functional-546336) Calling .GetSSHHostname
	I0408 23:31:21.182848   30464 main.go:141] libmachine: (functional-546336) DBG | domain functional-546336 has defined MAC address 52:54:00:8a:a0:b8 in network mk-functional-546336
	I0408 23:31:21.182933   30464 main.go:141] libmachine: (functional-546336) DBG | domain functional-546336 has defined MAC address 52:54:00:8a:a0:b8 in network mk-functional-546336
	I0408 23:31:21.183159   30464 main.go:141] libmachine: (functional-546336) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:a0:b8", ip: ""} in network mk-functional-546336: {Iface:virbr1 ExpiryTime:2025-04-09 00:23:48 +0000 UTC Type:0 Mac:52:54:00:8a:a0:b8 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:functional-546336 Clientid:01:52:54:00:8a:a0:b8}
	I0408 23:31:21.183177   30464 main.go:141] libmachine: (functional-546336) DBG | domain functional-546336 has defined IP address 192.168.39.234 and MAC address 52:54:00:8a:a0:b8 in network mk-functional-546336
	I0408 23:31:21.183286   30464 main.go:141] libmachine: (functional-546336) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:a0:b8", ip: ""} in network mk-functional-546336: {Iface:virbr1 ExpiryTime:2025-04-09 00:23:48 +0000 UTC Type:0 Mac:52:54:00:8a:a0:b8 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:functional-546336 Clientid:01:52:54:00:8a:a0:b8}
	I0408 23:31:21.183302   30464 main.go:141] libmachine: (functional-546336) Calling .GetSSHPort
	I0408 23:31:21.183298   30464 main.go:141] libmachine: (functional-546336) DBG | domain functional-546336 has defined IP address 192.168.39.234 and MAC address 52:54:00:8a:a0:b8 in network mk-functional-546336
	I0408 23:31:21.183455   30464 main.go:141] libmachine: (functional-546336) Calling .GetSSHKeyPath
	I0408 23:31:21.183479   30464 main.go:141] libmachine: (functional-546336) Calling .GetSSHPort
	I0408 23:31:21.183609   30464 main.go:141] libmachine: (functional-546336) Calling .GetSSHKeyPath
	I0408 23:31:21.183615   30464 main.go:141] libmachine: (functional-546336) Calling .GetSSHUsername
	I0408 23:31:21.183751   30464 main.go:141] libmachine: (functional-546336) Calling .GetSSHUsername
	I0408 23:31:21.183756   30464 sshutil.go:53] new ssh client: &{IP:192.168.39.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20501-9125/.minikube/machines/functional-546336/id_rsa Username:docker}
	I0408 23:31:21.183843   30464 sshutil.go:53] new ssh client: &{IP:192.168.39.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20501-9125/.minikube/machines/functional-546336/id_rsa Username:docker}
	I0408 23:31:21.292494   30464 ssh_runner.go:195] Run: systemctl --version
	I0408 23:31:21.297931   30464 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0408 23:31:21.446523   30464 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0408 23:31:21.467211   30464 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0408 23:31:21.467260   30464 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0408 23:31:21.513104   30464 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0408 23:31:21.513120   30464 start.go:495] detecting cgroup driver to use...
	I0408 23:31:21.513210   30464 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0408 23:31:21.550749   30464 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0408 23:31:21.582189   30464 docker.go:217] disabling cri-docker service (if available) ...
	I0408 23:31:21.582255   30464 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0408 23:31:21.609188   30464 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0408 23:31:21.657268   30464 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0408 23:31:21.859739   30464 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0408 23:31:21.996991   30464 docker.go:233] disabling docker service ...
	I0408 23:31:21.997037   30464 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0408 23:31:22.019508   30464 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0408 23:31:22.032031   30464 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0408 23:31:22.170395   30464 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0408 23:31:22.301488   30464 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0408 23:31:22.314846   30464 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0408 23:31:22.332023   30464 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0408 23:31:22.332065   30464 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 23:31:22.341354   30464 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0408 23:31:22.341415   30464 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 23:31:22.350868   30464 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 23:31:22.360357   30464 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 23:31:22.369514   30464 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0408 23:31:22.378977   30464 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 23:31:22.388103   30464 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 23:31:22.402781   30464 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 23:31:22.418859   30464 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0408 23:31:22.429689   30464 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0408 23:31:22.438546   30464 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 23:31:22.572739   30464 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0408 23:32:52.886937   30464 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.314171921s)
	I0408 23:32:52.886968   30464 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0408 23:32:52.887010   30464 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0408 23:32:52.892099   30464 start.go:563] Will wait 60s for crictl version
	I0408 23:32:52.892135   30464 ssh_runner.go:195] Run: which crictl
	I0408 23:32:52.895574   30464 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0408 23:32:52.930694   30464 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0408 23:32:52.930744   30464 ssh_runner.go:195] Run: crio --version
	I0408 23:32:52.957096   30464 ssh_runner.go:195] Run: crio --version
	I0408 23:32:52.984080   30464 out.go:177] * Preparing Kubernetes v1.32.2 on CRI-O 1.29.1 ...
	I0408 23:32:52.985223   30464 main.go:141] libmachine: (functional-546336) Calling .GetIP
	I0408 23:32:52.988218   30464 main.go:141] libmachine: (functional-546336) DBG | domain functional-546336 has defined MAC address 52:54:00:8a:a0:b8 in network mk-functional-546336
	I0408 23:32:52.988571   30464 main.go:141] libmachine: (functional-546336) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:a0:b8", ip: ""} in network mk-functional-546336: {Iface:virbr1 ExpiryTime:2025-04-09 00:23:48 +0000 UTC Type:0 Mac:52:54:00:8a:a0:b8 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:functional-546336 Clientid:01:52:54:00:8a:a0:b8}
	I0408 23:32:52.988606   30464 main.go:141] libmachine: (functional-546336) DBG | domain functional-546336 has defined IP address 192.168.39.234 and MAC address 52:54:00:8a:a0:b8 in network mk-functional-546336
	I0408 23:32:52.988822   30464 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0408 23:32:52.993910   30464 out.go:177]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I0408 23:32:52.994891   30464 kubeadm.go:883] updating cluster {Name:functional-546336 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:functional-5
46336 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.234 Port:8441 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube
-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0408 23:32:52.994998   30464 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0408 23:32:52.995042   30464 ssh_runner.go:195] Run: sudo crictl images --output json
	I0408 23:32:53.040204   30464 crio.go:514] all images are preloaded for cri-o runtime.
	I0408 23:32:53.040215   30464 crio.go:433] Images already preloaded, skipping extraction
	I0408 23:32:53.040258   30464 ssh_runner.go:195] Run: sudo crictl images --output json
	I0408 23:32:53.071663   30464 crio.go:514] all images are preloaded for cri-o runtime.
	I0408 23:32:53.071673   30464 cache_images.go:84] Images are preloaded, skipping loading
	I0408 23:32:53.071679   30464 kubeadm.go:934] updating node { 192.168.39.234 8441 v1.32.2 crio true true} ...
	I0408 23:32:53.071778   30464 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-546336 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.234
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:functional-546336 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0408 23:32:53.071837   30464 ssh_runner.go:195] Run: crio config
	I0408 23:32:53.119778   30464 extraconfig.go:124] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I0408 23:32:53.119796   30464 cni.go:84] Creating CNI manager for ""
	I0408 23:32:53.119806   30464 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0408 23:32:53.119820   30464 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0408 23:32:53.119845   30464 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.234 APIServerPort:8441 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-546336 NodeName:functional-546336 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.234"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.234 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigO
pts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0408 23:32:53.119976   30464 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.234
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-546336"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.234"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.234"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0408 23:32:53.120038   30464 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0408 23:32:53.129993   30464 binaries.go:44] Found k8s binaries, skipping transfer
	I0408 23:32:53.130040   30464 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0408 23:32:53.138896   30464 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0408 23:32:53.153867   30464 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0408 23:32:53.169091   30464 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2147 bytes)
	I0408 23:32:53.183424   30464 ssh_runner.go:195] Run: grep 192.168.39.234	control-plane.minikube.internal$ /etc/hosts
	I0408 23:32:53.186779   30464 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 23:32:53.301042   30464 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0408 23:32:53.315467   30464 certs.go:68] Setting up /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/functional-546336 for IP: 192.168.39.234
	I0408 23:32:53.315480   30464 certs.go:194] generating shared ca certs ...
	I0408 23:32:53.315493   30464 certs.go:226] acquiring lock for ca certs: {Name:mk0d455aae85017ac942481bbc1202ccedea144f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 23:32:53.315746   30464 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20501-9125/.minikube/ca.key
	I0408 23:32:53.315795   30464 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20501-9125/.minikube/proxy-client-ca.key
	I0408 23:32:53.315813   30464 certs.go:256] generating profile certs ...
	I0408 23:32:53.315983   30464 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/functional-546336/client.key
	I0408 23:32:53.316033   30464 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/functional-546336/apiserver.key.848fae18
	I0408 23:32:53.316072   30464 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/functional-546336/proxy-client.key
	I0408 23:32:53.316214   30464 certs.go:484] found cert: /home/jenkins/minikube-integration/20501-9125/.minikube/certs/16314.pem (1338 bytes)
	W0408 23:32:53.316238   30464 certs.go:480] ignoring /home/jenkins/minikube-integration/20501-9125/.minikube/certs/16314_empty.pem, impossibly tiny 0 bytes
	I0408 23:32:53.316244   30464 certs.go:484] found cert: /home/jenkins/minikube-integration/20501-9125/.minikube/certs/ca-key.pem (1679 bytes)
	I0408 23:32:53.316265   30464 certs.go:484] found cert: /home/jenkins/minikube-integration/20501-9125/.minikube/certs/ca.pem (1082 bytes)
	I0408 23:32:53.316283   30464 certs.go:484] found cert: /home/jenkins/minikube-integration/20501-9125/.minikube/certs/cert.pem (1123 bytes)
	I0408 23:32:53.316312   30464 certs.go:484] found cert: /home/jenkins/minikube-integration/20501-9125/.minikube/certs/key.pem (1675 bytes)
	I0408 23:32:53.316360   30464 certs.go:484] found cert: /home/jenkins/minikube-integration/20501-9125/.minikube/files/etc/ssl/certs/163142.pem (1708 bytes)
	I0408 23:32:53.317427   30464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20501-9125/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0408 23:32:53.341767   30464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20501-9125/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0408 23:32:53.362745   30464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20501-9125/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0408 23:32:53.383506   30464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20501-9125/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0408 23:32:53.406081   30464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/functional-546336/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0408 23:32:53.427988   30464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/functional-546336/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0408 23:32:53.448329   30464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/functional-546336/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0408 23:32:53.468679   30464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/functional-546336/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0408 23:32:53.489849   30464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20501-9125/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0408 23:32:53.517715   30464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20501-9125/.minikube/certs/16314.pem --> /usr/share/ca-certificates/16314.pem (1338 bytes)
	I0408 23:32:53.593457   30464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20501-9125/.minikube/files/etc/ssl/certs/163142.pem --> /usr/share/ca-certificates/163142.pem (1708 bytes)
	I0408 23:32:53.642770   30464 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0408 23:32:53.668173   30464 ssh_runner.go:195] Run: openssl version
	I0408 23:32:53.677715   30464 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0408 23:32:53.687597   30464 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0408 23:32:53.691629   30464 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  8 22:46 /usr/share/ca-certificates/minikubeCA.pem
	I0408 23:32:53.691664   30464 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0408 23:32:53.697100   30464 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0408 23:32:53.705216   30464 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16314.pem && ln -fs /usr/share/ca-certificates/16314.pem /etc/ssl/certs/16314.pem"
	I0408 23:32:53.714996   30464 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16314.pem
	I0408 23:32:53.718843   30464 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  8 22:53 /usr/share/ca-certificates/16314.pem
	I0408 23:32:53.718886   30464 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16314.pem
	I0408 23:32:53.724116   30464 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16314.pem /etc/ssl/certs/51391683.0"
	I0408 23:32:53.732175   30464 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/163142.pem && ln -fs /usr/share/ca-certificates/163142.pem /etc/ssl/certs/163142.pem"
	I0408 23:32:53.741661   30464 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/163142.pem
	I0408 23:32:53.745467   30464 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  8 22:53 /usr/share/ca-certificates/163142.pem
	I0408 23:32:53.745506   30464 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/163142.pem
	I0408 23:32:53.750465   30464 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/163142.pem /etc/ssl/certs/3ec20f2e.0"
	I0408 23:32:53.758915   30464 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0408 23:32:53.763041   30464 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0408 23:32:53.768156   30464 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0408 23:32:53.773203   30464 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0408 23:32:53.778260   30464 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0408 23:32:53.783216   30464 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0408 23:32:53.788184   30464 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0408 23:32:53.793145   30464 kubeadm.go:392] StartCluster: {Name:functional-546336 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:functional-5463
36 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.234 Port:8441 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-ho
st Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 23:32:53.793227   30464 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0408 23:32:53.793265   30464 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0408 23:32:53.828434   30464 cri.go:89] found id: "025133e197ca76ba8b17954c1ecfe0a00d4e89c5367e733ef5177810b2218d0b"
	I0408 23:32:53.828447   30464 cri.go:89] found id: "1d548bdb16c0b53538b9fedb82ea9a70267ea61b01cdefe0a01ad1ce451cbe9e"
	I0408 23:32:53.828451   30464 cri.go:89] found id: "1661ef4bbc8d9bd87ace82439a6240cc02dae018dd90cc00e736742a658aafc6"
	I0408 23:32:53.828454   30464 cri.go:89] found id: "a49d33808640c366a7d5443834c9e5e5f2248bf901344434fbe3ea7884636c8a"
	I0408 23:32:53.828457   30464 cri.go:89] found id: ""
	I0408 23:32:53.828504   30464 ssh_runner.go:195] Run: sudo runc list -f json
	I0408 23:32:53.860773   30464 cri.go:116] JSON = [{"ociVersion":"1.2.0","id":"025133e197ca76ba8b17954c1ecfe0a00d4e89c5367e733ef5177810b2218d0b","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/025133e197ca76ba8b17954c1ecfe0a00d4e89c5367e733ef5177810b2218d0b/userdata","rootfs":"/var/lib/containers/storage/overlay/f8123da0610aa9fbd39982c34b86759f71e29d0c9ef5189b436190194cee75a4/merged","created":"2025-04-08T23:31:21.67419932Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"e68be80f","io.kubernetes.container.name":"etcd","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"e68be80f\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\
":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"025133e197ca76ba8b17954c1ecfe0a00d4e89c5367e733ef5177810b2218d0b","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2025-04-08T23:31:21.586332392Z","io.kubernetes.cri-o.Image":"a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc","io.kubernetes.cri-o.ImageName":"registry.k8s.io/etcd:3.5.16-0","io.kubernetes.cri-o.ImageRef":"a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"etcd\",\"io.kubernetes.pod.name\":\"etcd-functional-546336\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"e567e58c6d72117dd010767530cff034\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_etcd-functional-546336_e567e58c6d72117dd010767530cff034/etcd/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"etcd\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/stor
age/overlay/f8123da0610aa9fbd39982c34b86759f71e29d0c9ef5189b436190194cee75a4/merged","io.kubernetes.cri-o.Name":"k8s_etcd_etcd-functional-546336_kube-system_e567e58c6d72117dd010767530cff034_1","io.kubernetes.cri-o.PlatformRuntimePath":"","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/2c237f9fc4d79261cf633d4a8002700911ed5a34d64c9cbc1c8ab4285656bbe8/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"2c237f9fc4d79261cf633d4a8002700911ed5a34d64c9cbc1c8ab4285656bbe8","io.kubernetes.cri-o.SandboxName":"k8s_etcd-functional-546336_kube-system_e567e58c6d72117dd010767530cff034_1","io.kubernetes.cri-o.SeccompProfilePath":"RuntimeDefault","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/e567e58c6d72117dd010767530cff034/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/
termination-log\",\"host_path\":\"/var/lib/kubelet/pods/e567e58c6d72117dd010767530cff034/containers/etcd/d40f376f\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/etcd\",\"host_path\":\"/var/lib/minikube/etcd\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs/etcd\",\"host_path\":\"/var/lib/minikube/certs/etcd\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"etcd-functional-546336","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"e567e58c6d72117dd010767530cff034","kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.234:2379","kubernetes.io/config.hash":"e567e58c6d72117dd010767530cff034","kubernetes.io/config.seen":"2025-04-08T23:04:45.640873069Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.2.0","id":"1661ef4bbc8d9bd87ace82439a6240cc02dae0
18dd90cc00e736742a658aafc6","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/1661ef4bbc8d9bd87ace82439a6240cc02dae018dd90cc00e736742a658aafc6/userdata","rootfs":"/var/lib/containers/storage/overlay/2df40abf956146e095cfa90f01527a9957fec1057e46ce28a3b0137172759576/merged","created":"2025-04-08T23:29:20.737118553Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"51692d3d","io.kubernetes.container.name":"kube-controller-manager","io.kubernetes.container.restartCount":"20","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"51692d3d\",\"io.kubernetes.container.restartCount\":\"20\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"166
1ef4bbc8d9bd87ace82439a6240cc02dae018dd90cc00e736742a658aafc6","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2025-04-08T23:29:20.696774663Z","io.kubernetes.cri-o.Image":"b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-controller-manager:v1.32.2","io.kubernetes.cri-o.ImageRef":"b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-controller-manager\",\"io.kubernetes.pod.name\":\"kube-controller-manager-functional-546336\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"58759ca9d0ee331612ac601ca4858681\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-functional-546336_58759ca9d0ee331612ac601ca4858681/kube-controller-manager/20.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager\",\"attempt\":20}","io.kubernetes.cri-o.MountPoint":"/var/lib/con
tainers/storage/overlay/2df40abf956146e095cfa90f01527a9957fec1057e46ce28a3b0137172759576/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager_kube-controller-manager-functional-546336_kube-system_58759ca9d0ee331612ac601ca4858681_20","io.kubernetes.cri-o.PlatformRuntimePath":"","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/9f843bd6803dbde0efa8dc6a99a4935999937bf3df878830b5bb410ddd23c0c4/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"9f843bd6803dbde0efa8dc6a99a4935999937bf3df878830b5bb410ddd23c0c4","io.kubernetes.cri-o.SandboxName":"k8s_kube-controller-manager-functional-546336_kube-system_58759ca9d0ee331612ac601ca4858681_0","io.kubernetes.cri-o.SeccompProfilePath":"RuntimeDefault","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/58759ca9d0ee331612ac601ca4858681/etc-hosts\",\"readonly\":false,\"
propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/58759ca9d0ee331612ac601ca4858681/containers/kube-controller-manager/209947b8\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kubernetes/controller-manager.conf\",\"host_path\":\"/etc/kubernetes/controller-manager.conf\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"host_path\":\"/usr/libexec/kubernetes/kubelet
-plugins/volume/exec\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-controller-manager-functional-546336","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"58759ca9d0ee331612ac601ca4858681","kubernetes.io/config.hash":"58759ca9d0ee331612ac601ca4858681","kubernetes.io/config.seen":"2025-04-08T23:04:45.640877204Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.2.0","id":"1d548bdb16c0b53538b9fedb82ea9a70267ea61b01cdefe0a01ad1ce451cbe9e","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/1d548bdb16c0b53538b9fedb82ea9a70267ea61b01cdefe0a01ad1ce451cbe9e/userdata","rootfs":"/var/lib/containers/storage/overlay/419e5f2129bf1d2e1af9a4e0bdb3ca88719d5723217e51fe3ca5321940a09e0d/merged","created":"2025-04-08T23:31:21.596945197Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"4c5aaea3","io.kubernetes.container.name":"kube-s
cheduler","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"4c5aaea3\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"1d548bdb16c0b53538b9fedb82ea9a70267ea61b01cdefe0a01ad1ce451cbe9e","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2025-04-08T23:31:21.559940865Z","io.kubernetes.cri-o.Image":"d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-scheduler:v1.32.2","io.kubernetes.cri-o.ImageRef":"d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d","io.kubernetes.cri-o.Labels":"{\"io.kubern
etes.container.name\":\"kube-scheduler\",\"io.kubernetes.pod.name\":\"kube-scheduler-functional-546336\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"472e19ec165b8a6bb82a16ee7ed00fbe\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-functional-546336_472e19ec165b8a6bb82a16ee7ed00fbe/kube-scheduler/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-scheduler\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/419e5f2129bf1d2e1af9a4e0bdb3ca88719d5723217e51fe3ca5321940a09e0d/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler_kube-scheduler-functional-546336_kube-system_472e19ec165b8a6bb82a16ee7ed00fbe_1","io.kubernetes.cri-o.PlatformRuntimePath":"","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/f99435be9f410361a6e36c6d3853703b756947d8592e641a3ec5f906f8d1b902/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"f99435be9f410361a6e36c6d3853703b756947d8592e641a3ec5f906f8d1b902","
io.kubernetes.cri-o.SandboxName":"k8s_kube-scheduler-functional-546336_kube-system_472e19ec165b8a6bb82a16ee7ed00fbe_1","io.kubernetes.cri-o.SeccompProfilePath":"RuntimeDefault","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/472e19ec165b8a6bb82a16ee7ed00fbe/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/472e19ec165b8a6bb82a16ee7ed00fbe/containers/kube-scheduler/266ecfca\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kubernetes/scheduler.conf\",\"host_path\":\"/etc/kubernetes/scheduler.conf\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-scheduler-functional-546336","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGra
cePeriod":"30","io.kubernetes.pod.uid":"472e19ec165b8a6bb82a16ee7ed00fbe","kubernetes.io/config.hash":"472e19ec165b8a6bb82a16ee7ed00fbe","kubernetes.io/config.seen":"2025-04-08T23:04:45.640878476Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.2.0","id":"2c237f9fc4d79261cf633d4a8002700911ed5a34d64c9cbc1c8ab4285656bbe8","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/2c237f9fc4d79261cf633d4a8002700911ed5a34d64c9cbc1c8ab4285656bbe8/userdata","rootfs":"/var/lib/containers/storage/overlay/3da1497f60044fda684c302aaafaa9f42190ccc80737587c689af456696059e3/merged","created":"2025-04-08T23:31:21.46602768Z","annotations":{"component":"etcd","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubeadm.kubernetes.io/etcd.advertise-client-urls\":\"https://192.168.39.234:2379\",\"kubernetes.io/config.seen\":\"2025-04-08T23:04:45.640873069Z\",\"kubernetes.io/config.source\":\"file\",\"kubernetes.io/config.hash\":\"e
567e58c6d72117dd010767530cff034\"}","io.kubernetes.cri-o.CgroupParent":"/kubepods/burstable/pode567e58c6d72117dd010767530cff034","io.kubernetes.cri-o.ContainerID":"2c237f9fc4d79261cf633d4a8002700911ed5a34d64c9cbc1c8ab4285656bbe8","io.kubernetes.cri-o.ContainerName":"k8s_POD_etcd-functional-546336_kube-system_e567e58c6d72117dd010767530cff034_1","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2025-04-08T23:31:21.37427492Z","io.kubernetes.cri-o.HostName":"functional-546336","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/2c237f9fc4d79261cf633d4a8002700911ed5a34d64c9cbc1c8ab4285656bbe8/userdata/hostname","io.kubernetes.cri-o.Image":"registry.k8s.io/pause:3.10","io.kubernetes.cri-o.ImageName":"registry.k8s.io/pause:3.10","io.kubernetes.cri-o.KubeName":"etcd-functional-546336","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"etcd-functional-546336\",\"ti
er\":\"control-plane\",\"io.kubernetes.container.name\":\"POD\",\"component\":\"etcd\",\"io.kubernetes.pod.uid\":\"e567e58c6d72117dd010767530cff034\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_etcd-functional-546336_e567e58c6d72117dd010767530cff034/2c237f9fc4d79261cf633d4a8002700911ed5a34d64c9cbc1c8ab4285656bbe8.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"etcd-functional-546336\",\"uid\":\"e567e58c6d72117dd010767530cff034\",\"namespace\":\"kube-system\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/3da1497f60044fda684c302aaafaa9f42190ccc80737587c689af456696059e3/merged","io.kubernetes.cri-o.Name":"k8s_etcd-functional-546336_kube-system_e567e58c6d72117dd010767530cff034_1","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PodLinuxOverhead":"{}","io.kubernetes.cri-o.PodLinuxResources":"{\"cpu_period\":100000,\"cpu_shares\":102,\"unified\":{\"memory.oom.group\":\"1\"}}",
"io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/2c237f9fc4d79261cf633d4a8002700911ed5a34d64c9cbc1c8ab4285656bbe8/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"2c237f9fc4d79261cf633d4a8002700911ed5a34d64c9cbc1c8ab4285656bbe8","io.kubernetes.cri-o.SandboxName":"k8s_etcd-functional-546336_kube-system_e567e58c6d72117dd010767530cff034_1","io.kubernetes.cri-o.SeccompProfilePath":"Unconfined","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/2c237f9fc4d79261cf633d4a8002700911ed5a34d64c9cbc1c8ab4285656bbe8/userdata/shm","io.kubernetes.pod.name":"etcd-functional-546336","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"e567e58c6d72117dd010767530cff034","kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.234:2379","kubernetes.io/config.hash":"e567e58c6d72117dd010767530cff034","kubernetes.
io/config.seen":"2025-04-08T23:04:45.640873069Z","kubernetes.io/config.source":"file","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.2.0","id":"36f0529cdb69bcf08100444d18f9a51f6157e6c1751cac37a9cdf6a3019673a3","pid":12535,"status":"running","bundle":"/run/containers/storage/overlay-containers/36f0529cdb69bcf08100444d18f9a51f6157e6c1751cac37a9cdf6a3019673a3/userdata","rootfs":"/var/lib/containers/storage/overlay/a43df2b64d10574fd93a6c57bd2e7839b85e0ac1109abdf58e7b5c180f7a2922/merged","created":"2025-04-08T23:32:53.538656433Z","annotations":{"component":"kube-scheduler","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.hash\":\"472e19ec165b8a6bb82a16ee7ed00fbe\",\"kubernetes.io/config.seen\":\"2025-04-08T23:04:45.640878476Z\",\"kubernetes.io/config.source\":\"file\"}","io.kubernetes.cri-o.CgroupParent":"/kubepods/burstable/pod472e19ec165b8a6bb82a16ee7ed00fbe","io.kubernetes.cri-o.ContainerID":"36f0529cdb69bcf08100444d18f9a51f
6157e6c1751cac37a9cdf6a3019673a3","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-scheduler-functional-546336_kube-system_472e19ec165b8a6bb82a16ee7ed00fbe_2","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2025-04-08T23:32:53.478266527Z","io.kubernetes.cri-o.HostName":"functional-546336","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/36f0529cdb69bcf08100444d18f9a51f6157e6c1751cac37a9cdf6a3019673a3/userdata/hostname","io.kubernetes.cri-o.Image":"registry.k8s.io/pause:3.10","io.kubernetes.cri-o.ImageName":"registry.k8s.io/pause:3.10","io.kubernetes.cri-o.KubeName":"kube-scheduler-functional-546336","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"POD\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"kube-scheduler-functional-546336\",\"tier\":\"control-plane\",\"component\":\"kube-scheduler\",\"io.kubernetes.pod.uid\":\"472e19ec165b8a6bb82a16ee7ed00fbe\"}","io
.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-functional-546336_472e19ec165b8a6bb82a16ee7ed00fbe/36f0529cdb69bcf08100444d18f9a51f6157e6c1751cac37a9cdf6a3019673a3.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-scheduler-functional-546336\",\"uid\":\"472e19ec165b8a6bb82a16ee7ed00fbe\",\"namespace\":\"kube-system\",\"attempt\":2}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/a43df2b64d10574fd93a6c57bd2e7839b85e0ac1109abdf58e7b5c180f7a2922/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler-functional-546336_kube-system_472e19ec165b8a6bb82a16ee7ed00fbe_2","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PodLinuxOverhead":"{}","io.kubernetes.cri-o.PodLinuxResources":"{\"cpu_period\":100000,\"cpu_shares\":102,\"unified\":{\"memory.oom.group\":\"1\"}}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/var
/run/containers/storage/overlay-containers/36f0529cdb69bcf08100444d18f9a51f6157e6c1751cac37a9cdf6a3019673a3/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"36f0529cdb69bcf08100444d18f9a51f6157e6c1751cac37a9cdf6a3019673a3","io.kubernetes.cri-o.SandboxName":"k8s_kube-scheduler-functional-546336_kube-system_472e19ec165b8a6bb82a16ee7ed00fbe_2","io.kubernetes.cri-o.SeccompProfilePath":"Unconfined","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/36f0529cdb69bcf08100444d18f9a51f6157e6c1751cac37a9cdf6a3019673a3/userdata/shm","io.kubernetes.pod.name":"kube-scheduler-functional-546336","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"472e19ec165b8a6bb82a16ee7ed00fbe","kubernetes.io/config.hash":"472e19ec165b8a6bb82a16ee7ed00fbe","kubernetes.io/config.seen":"2025-04-08T23:04:45.640878476Z","kubernetes.io/config.source":"file","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.2.0","id":"398b8f93901f01b1fc6e27e8f2609c29
6a47c0cce3f992a731f500de88f30df1","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/398b8f93901f01b1fc6e27e8f2609c296a47c0cce3f992a731f500de88f30df1/userdata","rootfs":"/var/lib/containers/storage/overlay/8dda8aca85f82fbff02972c69dbf7fda347489aec6670e70d20952da9d46f161/merged","created":"2025-04-08T23:04:46.159495028Z","annotations":{"component":"kube-apiserver","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.source\":\"file\",\"kubernetes.io/config.hash\":\"6b6b4c8ccb3c7c6b209837aca5454ea1\",\"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint\":\"192.168.39.234:8441\",\"kubernetes.io/config.seen\":\"2025-04-08T23:04:45.640875893Z\"}","io.kubernetes.cri-o.CgroupParent":"/kubepods/burstable/pod6b6b4c8ccb3c7c6b209837aca5454ea1","io.kubernetes.cri-o.ContainerID":"398b8f93901f01b1fc6e27e8f2609c296a47c0cce3f992a731f500de88f30df1","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-apiserver-func
tional-546336_kube-system_6b6b4c8ccb3c7c6b209837aca5454ea1_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2025-04-08T23:04:46.105967804Z","io.kubernetes.cri-o.HostName":"functional-546336","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/398b8f93901f01b1fc6e27e8f2609c296a47c0cce3f992a731f500de88f30df1/userdata/hostname","io.kubernetes.cri-o.Image":"registry.k8s.io/pause:3.10","io.kubernetes.cri-o.ImageName":"registry.k8s.io/pause:3.10","io.kubernetes.cri-o.KubeName":"kube-apiserver-functional-546336","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"POD\",\"io.kubernetes.pod.name\":\"kube-apiserver-functional-546336\",\"tier\":\"control-plane\",\"component\":\"kube-apiserver\",\"io.kubernetes.pod.uid\":\"6b6b4c8ccb3c7c6b209837aca5454ea1\",\"io.kubernetes.pod.namespace\":\"kube-system\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-functional-546336_6b6b4c8ccb3
c7c6b209837aca5454ea1/398b8f93901f01b1fc6e27e8f2609c296a47c0cce3f992a731f500de88f30df1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-apiserver-functional-546336\",\"uid\":\"6b6b4c8ccb3c7c6b209837aca5454ea1\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/8dda8aca85f82fbff02972c69dbf7fda347489aec6670e70d20952da9d46f161/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver-functional-546336_kube-system_6b6b4c8ccb3c7c6b209837aca5454ea1_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PodLinuxOverhead":"{}","io.kubernetes.cri-o.PodLinuxResources":"{\"cpu_period\":100000,\"cpu_shares\":256,\"unified\":{\"memory.oom.group\":\"1\"}}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/398b8f93901f01b1fc6e27e8f2609c296a47c0cce3f992a731f500de88f30df1/user
data/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"398b8f93901f01b1fc6e27e8f2609c296a47c0cce3f992a731f500de88f30df1","io.kubernetes.cri-o.SandboxName":"k8s_kube-apiserver-functional-546336_kube-system_6b6b4c8ccb3c7c6b209837aca5454ea1_0","io.kubernetes.cri-o.SeccompProfilePath":"Unconfined","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/398b8f93901f01b1fc6e27e8f2609c296a47c0cce3f992a731f500de88f30df1/userdata/shm","io.kubernetes.pod.name":"kube-apiserver-functional-546336","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"6b6b4c8ccb3c7c6b209837aca5454ea1","kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.234:8441","kubernetes.io/config.hash":"6b6b4c8ccb3c7c6b209837aca5454ea1","kubernetes.io/config.seen":"2025-04-08T23:04:45.640875893Z","kubernetes.io/config.source":"file","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.2.0","id":"5b963c4ce012b8ab9cf858b83eb0dbd5954dda2e453faa8c1a19c099
b5902498","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/5b963c4ce012b8ab9cf858b83eb0dbd5954dda2e453faa8c1a19c099b5902498/userdata","rootfs":"/var/lib/containers/storage/overlay/d3157da5d98678ae2b2d1d0be407c2b29e23cd504f1a3a130da87ebe7e8ac9a5/merged","created":"2025-04-08T23:04:46.168820852Z","annotations":{"component":"etcd","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.seen\":\"2025-04-08T23:04:45.640873069Z\",\"kubernetes.io/config.source\":\"file\",\"kubernetes.io/config.hash\":\"e567e58c6d72117dd010767530cff034\",\"kubeadm.kubernetes.io/etcd.advertise-client-urls\":\"https://192.168.39.234:2379\"}","io.kubernetes.cri-o.CgroupParent":"/kubepods/burstable/pode567e58c6d72117dd010767530cff034","io.kubernetes.cri-o.ContainerID":"5b963c4ce012b8ab9cf858b83eb0dbd5954dda2e453faa8c1a19c099b5902498","io.kubernetes.cri-o.ContainerName":"k8s_POD_etcd-functional-546336_kube-system_e567e58c6d72117dd01076753
0cff034_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2025-04-08T23:04:46.095654617Z","io.kubernetes.cri-o.HostName":"functional-546336","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/5b963c4ce012b8ab9cf858b83eb0dbd5954dda2e453faa8c1a19c099b5902498/userdata/hostname","io.kubernetes.cri-o.Image":"registry.k8s.io/pause:3.10","io.kubernetes.cri-o.ImageName":"registry.k8s.io/pause:3.10","io.kubernetes.cri-o.KubeName":"etcd-functional-546336","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"POD\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"etcd-functional-546336\",\"tier\":\"control-plane\",\"component\":\"etcd\",\"io.kubernetes.pod.uid\":\"e567e58c6d72117dd010767530cff034\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_etcd-functional-546336_e567e58c6d72117dd010767530cff034/5b963c4ce012b8ab9cf858b83eb0dbd5954dda2e453faa8c1a19c099b5902498.log"
,"io.kubernetes.cri-o.Metadata":"{\"name\":\"etcd-functional-546336\",\"uid\":\"e567e58c6d72117dd010767530cff034\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/d3157da5d98678ae2b2d1d0be407c2b29e23cd504f1a3a130da87ebe7e8ac9a5/merged","io.kubernetes.cri-o.Name":"k8s_etcd-functional-546336_kube-system_e567e58c6d72117dd010767530cff034_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PodLinuxOverhead":"{}","io.kubernetes.cri-o.PodLinuxResources":"{\"cpu_period\":100000,\"cpu_shares\":102,\"unified\":{\"memory.oom.group\":\"1\"}}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/5b963c4ce012b8ab9cf858b83eb0dbd5954dda2e453faa8c1a19c099b5902498/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"5b963c4ce012b8ab9cf8
58b83eb0dbd5954dda2e453faa8c1a19c099b5902498","io.kubernetes.cri-o.SandboxName":"k8s_etcd-functional-546336_kube-system_e567e58c6d72117dd010767530cff034_0","io.kubernetes.cri-o.SeccompProfilePath":"Unconfined","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/5b963c4ce012b8ab9cf858b83eb0dbd5954dda2e453faa8c1a19c099b5902498/userdata/shm","io.kubernetes.pod.name":"etcd-functional-546336","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"e567e58c6d72117dd010767530cff034","kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.234:2379","kubernetes.io/config.hash":"e567e58c6d72117dd010767530cff034","kubernetes.io/config.seen":"2025-04-08T23:04:45.640873069Z","kubernetes.io/config.source":"file","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.2.0","id":"9f843bd6803dbde0efa8dc6a99a4935999937bf3df878830b5bb410ddd23c0c4","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/9f843bd6803dbde0efa8dc6a99a4935999937bf3df878830
b5bb410ddd23c0c4/userdata","rootfs":"/var/lib/containers/storage/overlay/f15e89b7c2c07e25972dfb177f5493279cc39180fa87af92339d7f1a785f1f0b/merged","created":"2025-04-08T23:04:46.184864235Z","annotations":{"component":"kube-controller-manager","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.hash\":\"58759ca9d0ee331612ac601ca4858681\",\"kubernetes.io/config.seen\":\"2025-04-08T23:04:45.640877204Z\",\"kubernetes.io/config.source\":\"file\"}","io.kubernetes.cri-o.CgroupParent":"/kubepods/burstable/pod58759ca9d0ee331612ac601ca4858681","io.kubernetes.cri-o.ContainerID":"9f843bd6803dbde0efa8dc6a99a4935999937bf3df878830b5bb410ddd23c0c4","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-controller-manager-functional-546336_kube-system_58759ca9d0ee331612ac601ca4858681_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2025-04-08T23:04:46.108488198Z","io.kubernetes.cri-o.HostName":"functional-546336","io.kubernetes
.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/9f843bd6803dbde0efa8dc6a99a4935999937bf3df878830b5bb410ddd23c0c4/userdata/hostname","io.kubernetes.cri-o.Image":"registry.k8s.io/pause:3.10","io.kubernetes.cri-o.ImageName":"registry.k8s.io/pause:3.10","io.kubernetes.cri-o.KubeName":"kube-controller-manager-functional-546336","io.kubernetes.cri-o.Labels":"{\"component\":\"kube-controller-manager\",\"io.kubernetes.pod.uid\":\"58759ca9d0ee331612ac601ca4858681\",\"io.kubernetes.container.name\":\"POD\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"kube-controller-manager-functional-546336\",\"tier\":\"control-plane\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-functional-546336_58759ca9d0ee331612ac601ca4858681/9f843bd6803dbde0efa8dc6a99a4935999937bf3df878830b5bb410ddd23c0c4.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager-functional-546336\",\"uid\":\"58759ca9d0
ee331612ac601ca4858681\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/f15e89b7c2c07e25972dfb177f5493279cc39180fa87af92339d7f1a785f1f0b/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager-functional-546336_kube-system_58759ca9d0ee331612ac601ca4858681_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PodLinuxOverhead":"{}","io.kubernetes.cri-o.PodLinuxResources":"{\"cpu_period\":100000,\"cpu_shares\":204,\"unified\":{\"memory.oom.group\":\"1\"}}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/9f843bd6803dbde0efa8dc6a99a4935999937bf3df878830b5bb410ddd23c0c4/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"9f843bd6803dbde0efa8dc6a99a4935999937bf3df878830b5bb410ddd23c0c4","io.kubernetes.cri-o.Sand
boxName":"k8s_kube-controller-manager-functional-546336_kube-system_58759ca9d0ee331612ac601ca4858681_0","io.kubernetes.cri-o.SeccompProfilePath":"Unconfined","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/9f843bd6803dbde0efa8dc6a99a4935999937bf3df878830b5bb410ddd23c0c4/userdata/shm","io.kubernetes.pod.name":"kube-controller-manager-functional-546336","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"58759ca9d0ee331612ac601ca4858681","kubernetes.io/config.hash":"58759ca9d0ee331612ac601ca4858681","kubernetes.io/config.seen":"2025-04-08T23:04:45.640877204Z","kubernetes.io/config.source":"file","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.2.0","id":"a023d99b22bb6d0843d0597572613366f1e51e1101643ccac8c3cfd678789e25","pid":12526,"status":"running","bundle":"/run/containers/storage/overlay-containers/a023d99b22bb6d0843d0597572613366f1e51e1101643ccac8c3cfd678789e25/userdata","rootfs":"/var/lib/containers/storage/overlay/3bc6cd0031f9f8f242d5a0acef4d5c858a027
ea5bf956688d0dff759bee6cb0c/merged","created":"2025-04-08T23:32:53.533436969Z","annotations":{"component":"etcd","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.seen\":\"2025-04-08T23:04:45.640873069Z\",\"kubernetes.io/config.source\":\"file\",\"kubernetes.io/config.hash\":\"e567e58c6d72117dd010767530cff034\",\"kubeadm.kubernetes.io/etcd.advertise-client-urls\":\"https://192.168.39.234:2379\"}","io.kubernetes.cri-o.CgroupParent":"/kubepods/burstable/pode567e58c6d72117dd010767530cff034","io.kubernetes.cri-o.ContainerID":"a023d99b22bb6d0843d0597572613366f1e51e1101643ccac8c3cfd678789e25","io.kubernetes.cri-o.ContainerName":"k8s_POD_etcd-functional-546336_kube-system_e567e58c6d72117dd010767530cff034_2","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2025-04-08T23:32:53.479861707Z","io.kubernetes.cri-o.HostName":"functional-546336","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"
/var/run/containers/storage/overlay-containers/a023d99b22bb6d0843d0597572613366f1e51e1101643ccac8c3cfd678789e25/userdata/hostname","io.kubernetes.cri-o.Image":"registry.k8s.io/pause:3.10","io.kubernetes.cri-o.ImageName":"registry.k8s.io/pause:3.10","io.kubernetes.cri-o.KubeName":"etcd-functional-546336","io.kubernetes.cri-o.Labels":"{\"component\":\"etcd\",\"io.kubernetes.pod.uid\":\"e567e58c6d72117dd010767530cff034\",\"io.kubernetes.container.name\":\"POD\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"etcd-functional-546336\",\"tier\":\"control-plane\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_etcd-functional-546336_e567e58c6d72117dd010767530cff034/a023d99b22bb6d0843d0597572613366f1e51e1101643ccac8c3cfd678789e25.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"etcd-functional-546336\",\"uid\":\"e567e58c6d72117dd010767530cff034\",\"namespace\":\"kube-system\",\"attempt\":2}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/3bc6cd0031f9f8f24
2d5a0acef4d5c858a027ea5bf956688d0dff759bee6cb0c/merged","io.kubernetes.cri-o.Name":"k8s_etcd-functional-546336_kube-system_e567e58c6d72117dd010767530cff034_2","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PodLinuxOverhead":"{}","io.kubernetes.cri-o.PodLinuxResources":"{\"cpu_period\":100000,\"cpu_shares\":102,\"unified\":{\"memory.oom.group\":\"1\"}}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/a023d99b22bb6d0843d0597572613366f1e51e1101643ccac8c3cfd678789e25/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"a023d99b22bb6d0843d0597572613366f1e51e1101643ccac8c3cfd678789e25","io.kubernetes.cri-o.SandboxName":"k8s_etcd-functional-546336_kube-system_e567e58c6d72117dd010767530cff034_2","io.kubernetes.cri-o.SeccompProfilePath":"Unconfined","io.kubernetes.cri-o.Shm
Path":"/var/run/containers/storage/overlay-containers/a023d99b22bb6d0843d0597572613366f1e51e1101643ccac8c3cfd678789e25/userdata/shm","io.kubernetes.pod.name":"etcd-functional-546336","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"e567e58c6d72117dd010767530cff034","kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.234:2379","kubernetes.io/config.hash":"e567e58c6d72117dd010767530cff034","kubernetes.io/config.seen":"2025-04-08T23:04:45.640873069Z","kubernetes.io/config.source":"file","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.2.0","id":"a49d33808640c366a7d5443834c9e5e5f2248bf901344434fbe3ea7884636c8a","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/a49d33808640c366a7d5443834c9e5e5f2248bf901344434fbe3ea7884636c8a/userdata","rootfs":"/var/lib/containers/storage/overlay/93e18f11b384ed4dd7154a22aaca192d352a729722a9d4f9a072fff87ce6cdd2/merged","created":"2025-04-08T23:29:07.740529542Z","annotations":{"io.container.manager":"cri-o"
,"io.kubernetes.container.hash":"7745040f","io.kubernetes.container.name":"kube-apiserver","io.kubernetes.container.restartCount":"20","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"7745040f\",\"io.kubernetes.container.restartCount\":\"20\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"a49d33808640c366a7d5443834c9e5e5f2248bf901344434fbe3ea7884636c8a","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2025-04-08T23:29:07.69617496Z","io.kubernetes.cri-o.Image":"85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-apiserver:v1.32.2","io.kubernetes.cri-o.ImageRef":"85b7a174738baecbc53029b791
3cd430a2060e0cbdb5f56c7957d32ff7f241ef","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-apiserver\",\"io.kubernetes.pod.name\":\"kube-apiserver-functional-546336\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"6b6b4c8ccb3c7c6b209837aca5454ea1\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-functional-546336_6b6b4c8ccb3c7c6b209837aca5454ea1/kube-apiserver/20.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-apiserver\",\"attempt\":20}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/93e18f11b384ed4dd7154a22aaca192d352a729722a9d4f9a072fff87ce6cdd2/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver_kube-apiserver-functional-546336_kube-system_6b6b4c8ccb3c7c6b209837aca5454ea1_20","io.kubernetes.cri-o.PlatformRuntimePath":"","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/398b8f93901f01b1fc6e27e8f2609c296a47c0cce3f992a731f500de88f30df1/userdata/resolv.conf","io.kubernetes.
cri-o.SandboxID":"398b8f93901f01b1fc6e27e8f2609c296a47c0cce3f992a731f500de88f30df1","io.kubernetes.cri-o.SandboxName":"k8s_kube-apiserver-functional-546336_kube-system_6b6b4c8ccb3c7c6b209837aca5454ea1_0","io.kubernetes.cri-o.SeccompProfilePath":"RuntimeDefault","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/6b6b4c8ccb3c7c6b209837aca5454ea1/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/6b6b4c8ccb3c7c6b209837aca5454ea1/containers/kube-apiserver/de50baae\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certi
ficates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-apiserver-functional-546336","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"6b6b4c8ccb3c7c6b209837aca5454ea1","kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.234:8441","kubernetes.io/config.hash":"6b6b4c8ccb3c7c6b209837aca5454ea1","kubernetes.io/config.seen":"2025-04-08T23:04:45.640875893Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.2.0","id":"e65cb166ffb25f73ba53ac87615e78e580fb44e0bc2ef831971e1a6e116c4b69","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/e65cb166ffb25f73ba53ac87615e78e580fb44e0bc2ef831971e1a6e116c4b69/userdata","rootfs":"/var/lib/containers/storage/overlay/5bdf32734edf3202bbd88bf6
6429e5d10b2855db80e4a55627c8cac4cd2d6d61/merged","created":"2025-04-08T23:04:46.175214668Z","annotations":{"component":"kube-scheduler","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.hash\":\"472e19ec165b8a6bb82a16ee7ed00fbe\",\"kubernetes.io/config.seen\":\"2025-04-08T23:04:45.640878476Z\",\"kubernetes.io/config.source\":\"file\"}","io.kubernetes.cri-o.CgroupParent":"/kubepods/burstable/pod472e19ec165b8a6bb82a16ee7ed00fbe","io.kubernetes.cri-o.ContainerID":"e65cb166ffb25f73ba53ac87615e78e580fb44e0bc2ef831971e1a6e116c4b69","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-scheduler-functional-546336_kube-system_472e19ec165b8a6bb82a16ee7ed00fbe_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2025-04-08T23:04:46.104022634Z","io.kubernetes.cri-o.HostName":"functional-546336","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/e65cb
166ffb25f73ba53ac87615e78e580fb44e0bc2ef831971e1a6e116c4b69/userdata/hostname","io.kubernetes.cri-o.Image":"registry.k8s.io/pause:3.10","io.kubernetes.cri-o.ImageName":"registry.k8s.io/pause:3.10","io.kubernetes.cri-o.KubeName":"kube-scheduler-functional-546336","io.kubernetes.cri-o.Labels":"{\"component\":\"kube-scheduler\",\"io.kubernetes.container.name\":\"POD\",\"io.kubernetes.pod.uid\":\"472e19ec165b8a6bb82a16ee7ed00fbe\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"kube-scheduler-functional-546336\",\"tier\":\"control-plane\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-functional-546336_472e19ec165b8a6bb82a16ee7ed00fbe/e65cb166ffb25f73ba53ac87615e78e580fb44e0bc2ef831971e1a6e116c4b69.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-scheduler-functional-546336\",\"uid\":\"472e19ec165b8a6bb82a16ee7ed00fbe\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/5bdf32734edf3202bbd88bf66429e5d10
b2855db80e4a55627c8cac4cd2d6d61/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler-functional-546336_kube-system_472e19ec165b8a6bb82a16ee7ed00fbe_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PodLinuxOverhead":"{}","io.kubernetes.cri-o.PodLinuxResources":"{\"cpu_period\":100000,\"cpu_shares\":102,\"unified\":{\"memory.oom.group\":\"1\"}}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/e65cb166ffb25f73ba53ac87615e78e580fb44e0bc2ef831971e1a6e116c4b69/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"e65cb166ffb25f73ba53ac87615e78e580fb44e0bc2ef831971e1a6e116c4b69","io.kubernetes.cri-o.SandboxName":"k8s_kube-scheduler-functional-546336_kube-system_472e19ec165b8a6bb82a16ee7ed00fbe_0","io.kubernetes.cri-o.SeccompProfilePath":"Unconfined","io.kubernetes.cri-o
.ShmPath":"/var/run/containers/storage/overlay-containers/e65cb166ffb25f73ba53ac87615e78e580fb44e0bc2ef831971e1a6e116c4b69/userdata/shm","io.kubernetes.pod.name":"kube-scheduler-functional-546336","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"472e19ec165b8a6bb82a16ee7ed00fbe","kubernetes.io/config.hash":"472e19ec165b8a6bb82a16ee7ed00fbe","kubernetes.io/config.seen":"2025-04-08T23:04:45.640878476Z","kubernetes.io/config.source":"file","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.2.0","id":"f99435be9f410361a6e36c6d3853703b756947d8592e641a3ec5f906f8d1b902","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/f99435be9f410361a6e36c6d3853703b756947d8592e641a3ec5f906f8d1b902/userdata","rootfs":"/var/lib/containers/storage/overlay/47a2cc56d016f3c736b58ef20f5654b346fdbe5045a4ce0fe027863efa067a4b/merged","created":"2025-04-08T23:31:21.470628894Z","annotations":{"component":"kube-scheduler","io.container.manager":"cri-o","io.kubernetes.container.name":"POD",
"io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.seen\":\"2025-04-08T23:04:45.640878476Z\",\"kubernetes.io/config.source\":\"file\",\"kubernetes.io/config.hash\":\"472e19ec165b8a6bb82a16ee7ed00fbe\"}","io.kubernetes.cri-o.CgroupParent":"/kubepods/burstable/pod472e19ec165b8a6bb82a16ee7ed00fbe","io.kubernetes.cri-o.ContainerID":"f99435be9f410361a6e36c6d3853703b756947d8592e641a3ec5f906f8d1b902","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-scheduler-functional-546336_kube-system_472e19ec165b8a6bb82a16ee7ed00fbe_1","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2025-04-08T23:31:21.363284719Z","io.kubernetes.cri-o.HostName":"functional-546336","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/f99435be9f410361a6e36c6d3853703b756947d8592e641a3ec5f906f8d1b902/userdata/hostname","io.kubernetes.cri-o.Image":"registry.k8s.io/pause:3.10","io.kubernetes.cri-o.ImageName":"registry.k8s.io/pause:3.10","io.kub
ernetes.cri-o.KubeName":"kube-scheduler-functional-546336","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"kube-scheduler-functional-546336\",\"tier\":\"control-plane\",\"io.kubernetes.container.name\":\"POD\",\"component\":\"kube-scheduler\",\"io.kubernetes.pod.uid\":\"472e19ec165b8a6bb82a16ee7ed00fbe\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-functional-546336_472e19ec165b8a6bb82a16ee7ed00fbe/f99435be9f410361a6e36c6d3853703b756947d8592e641a3ec5f906f8d1b902.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-scheduler-functional-546336\",\"uid\":\"472e19ec165b8a6bb82a16ee7ed00fbe\",\"namespace\":\"kube-system\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/47a2cc56d016f3c736b58ef20f5654b346fdbe5045a4ce0fe027863efa067a4b/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler-functional-546336_kube-system_472e19ec165b8a6bb82a16ee7ed00fbe_1","io.kubernetes.cri-o.Namespace":"kube
-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PodLinuxOverhead":"{}","io.kubernetes.cri-o.PodLinuxResources":"{\"cpu_period\":100000,\"cpu_shares\":102,\"unified\":{\"memory.oom.group\":\"1\"}}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/f99435be9f410361a6e36c6d3853703b756947d8592e641a3ec5f906f8d1b902/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"f99435be9f410361a6e36c6d3853703b756947d8592e641a3ec5f906f8d1b902","io.kubernetes.cri-o.SandboxName":"k8s_kube-scheduler-functional-546336_kube-system_472e19ec165b8a6bb82a16ee7ed00fbe_1","io.kubernetes.cri-o.SeccompProfilePath":"Unconfined","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/f99435be9f410361a6e36c6d3853703b756947d8592e641a3ec5f906f8d1b902/userdata/shm","io.kubernetes.pod.name":"kube-scheduler-functional-5
46336","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"472e19ec165b8a6bb82a16ee7ed00fbe","kubernetes.io/config.hash":"472e19ec165b8a6bb82a16ee7ed00fbe","kubernetes.io/config.seen":"2025-04-08T23:04:45.640878476Z","kubernetes.io/config.source":"file","tier":"control-plane"},"owner":"root"}]
	I0408 23:32:53.861236   30464 cri.go:126] list returned 12 containers
	I0408 23:32:53.861243   30464 cri.go:129] container: {ID:025133e197ca76ba8b17954c1ecfe0a00d4e89c5367e733ef5177810b2218d0b Status:stopped}
	I0408 23:32:53.861253   30464 cri.go:135] skipping {025133e197ca76ba8b17954c1ecfe0a00d4e89c5367e733ef5177810b2218d0b stopped}: state = "stopped", want "paused"
	I0408 23:32:53.861258   30464 cri.go:129] container: {ID:1661ef4bbc8d9bd87ace82439a6240cc02dae018dd90cc00e736742a658aafc6 Status:stopped}
	I0408 23:32:53.861262   30464 cri.go:135] skipping {1661ef4bbc8d9bd87ace82439a6240cc02dae018dd90cc00e736742a658aafc6 stopped}: state = "stopped", want "paused"
	I0408 23:32:53.861264   30464 cri.go:129] container: {ID:1d548bdb16c0b53538b9fedb82ea9a70267ea61b01cdefe0a01ad1ce451cbe9e Status:stopped}
	I0408 23:32:53.861266   30464 cri.go:135] skipping {1d548bdb16c0b53538b9fedb82ea9a70267ea61b01cdefe0a01ad1ce451cbe9e stopped}: state = "stopped", want "paused"
	I0408 23:32:53.861269   30464 cri.go:129] container: {ID:2c237f9fc4d79261cf633d4a8002700911ed5a34d64c9cbc1c8ab4285656bbe8 Status:stopped}
	I0408 23:32:53.861274   30464 cri.go:131] skipping 2c237f9fc4d79261cf633d4a8002700911ed5a34d64c9cbc1c8ab4285656bbe8 - not in ps
	I0408 23:32:53.861279   30464 cri.go:129] container: {ID:36f0529cdb69bcf08100444d18f9a51f6157e6c1751cac37a9cdf6a3019673a3 Status:running}
	I0408 23:32:53.861284   30464 cri.go:131] skipping 36f0529cdb69bcf08100444d18f9a51f6157e6c1751cac37a9cdf6a3019673a3 - not in ps
	I0408 23:32:53.861286   30464 cri.go:129] container: {ID:398b8f93901f01b1fc6e27e8f2609c296a47c0cce3f992a731f500de88f30df1 Status:stopped}
	I0408 23:32:53.861289   30464 cri.go:131] skipping 398b8f93901f01b1fc6e27e8f2609c296a47c0cce3f992a731f500de88f30df1 - not in ps
	I0408 23:32:53.861291   30464 cri.go:129] container: {ID:5b963c4ce012b8ab9cf858b83eb0dbd5954dda2e453faa8c1a19c099b5902498 Status:stopped}
	I0408 23:32:53.861295   30464 cri.go:131] skipping 5b963c4ce012b8ab9cf858b83eb0dbd5954dda2e453faa8c1a19c099b5902498 - not in ps
	I0408 23:32:53.861296   30464 cri.go:129] container: {ID:9f843bd6803dbde0efa8dc6a99a4935999937bf3df878830b5bb410ddd23c0c4 Status:stopped}
	I0408 23:32:53.861298   30464 cri.go:131] skipping 9f843bd6803dbde0efa8dc6a99a4935999937bf3df878830b5bb410ddd23c0c4 - not in ps
	I0408 23:32:53.861300   30464 cri.go:129] container: {ID:a023d99b22bb6d0843d0597572613366f1e51e1101643ccac8c3cfd678789e25 Status:running}
	I0408 23:32:53.861302   30464 cri.go:131] skipping a023d99b22bb6d0843d0597572613366f1e51e1101643ccac8c3cfd678789e25 - not in ps
	I0408 23:32:53.861303   30464 cri.go:129] container: {ID:a49d33808640c366a7d5443834c9e5e5f2248bf901344434fbe3ea7884636c8a Status:stopped}
	I0408 23:32:53.861307   30464 cri.go:135] skipping {a49d33808640c366a7d5443834c9e5e5f2248bf901344434fbe3ea7884636c8a stopped}: state = "stopped", want "paused"
	I0408 23:32:53.861310   30464 cri.go:129] container: {ID:e65cb166ffb25f73ba53ac87615e78e580fb44e0bc2ef831971e1a6e116c4b69 Status:stopped}
	I0408 23:32:53.861314   30464 cri.go:131] skipping e65cb166ffb25f73ba53ac87615e78e580fb44e0bc2ef831971e1a6e116c4b69 - not in ps
	I0408 23:32:53.861316   30464 cri.go:129] container: {ID:f99435be9f410361a6e36c6d3853703b756947d8592e641a3ec5f906f8d1b902 Status:stopped}
	I0408 23:32:53.861319   30464 cri.go:131] skipping f99435be9f410361a6e36c6d3853703b756947d8592e641a3ec5f906f8d1b902 - not in ps
	I0408 23:32:53.861356   30464 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0408 23:32:53.870369   30464 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0408 23:32:53.870378   30464 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0408 23:32:53.870418   30464 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0408 23:32:53.879195   30464 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0408 23:32:53.879647   30464 kubeconfig.go:125] found "functional-546336" server: "https://192.168.39.234:8441"
	I0408 23:32:53.880921   30464 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0408 23:32:53.889237   30464 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.39.234"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
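	The drift above only touches the apiserver's enable-admission-plugins value. One way to confirm which admission plugins the restarted apiserver actually runs with is to inspect the static pod's command line; this is a sketch using the context and pod name taken from the log, not part of the captured output:

	    # print the kube-apiserver flags and pick out the admission-plugins setting
	    kubectl --context functional-546336 -n kube-system get pod kube-apiserver-functional-546336 \
	      -o jsonpath='{.spec.containers[0].command}' | tr ',' '\n' | grep enable-admission-plugins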
	I0408 23:32:53.889252   30464 kubeadm.go:1160] stopping kube-system containers ...
	I0408 23:32:53.889262   30464 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0408 23:32:53.889303   30464 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0408 23:32:53.921737   30464 cri.go:89] found id: "025133e197ca76ba8b17954c1ecfe0a00d4e89c5367e733ef5177810b2218d0b"
	I0408 23:32:53.921757   30464 cri.go:89] found id: "1d548bdb16c0b53538b9fedb82ea9a70267ea61b01cdefe0a01ad1ce451cbe9e"
	I0408 23:32:53.921760   30464 cri.go:89] found id: "1661ef4bbc8d9bd87ace82439a6240cc02dae018dd90cc00e736742a658aafc6"
	I0408 23:32:53.921762   30464 cri.go:89] found id: "a49d33808640c366a7d5443834c9e5e5f2248bf901344434fbe3ea7884636c8a"
	I0408 23:32:53.921763   30464 cri.go:89] found id: ""
	I0408 23:32:53.921768   30464 cri.go:252] Stopping containers: [025133e197ca76ba8b17954c1ecfe0a00d4e89c5367e733ef5177810b2218d0b 1d548bdb16c0b53538b9fedb82ea9a70267ea61b01cdefe0a01ad1ce451cbe9e 1661ef4bbc8d9bd87ace82439a6240cc02dae018dd90cc00e736742a658aafc6 a49d33808640c366a7d5443834c9e5e5f2248bf901344434fbe3ea7884636c8a]
	I0408 23:32:53.921811   30464 ssh_runner.go:195] Run: which crictl
	I0408 23:32:53.925312   30464 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop --timeout=10 025133e197ca76ba8b17954c1ecfe0a00d4e89c5367e733ef5177810b2218d0b 1d548bdb16c0b53538b9fedb82ea9a70267ea61b01cdefe0a01ad1ce451cbe9e 1661ef4bbc8d9bd87ace82439a6240cc02dae018dd90cc00e736742a658aafc6 a49d33808640c366a7d5443834c9e5e5f2248bf901344434fbe3ea7884636c8a
	I0408 23:32:53.981020   30464 ssh_runner.go:195] Run: sudo systemctl stop kubelet
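	The teardown above can be reproduced by hand on the node (e.g. via "minikube -p functional-546336 ssh"); a minimal sketch of the same crictl calls, with the container IDs left as placeholders:

	    sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system   # list kube-system container IDs
	    sudo crictl stop --timeout=10 <id> [<id> ...]                               # stop them with a 10s grace period
	    sudo systemctl stop kubelet                                                 # then stop the kubelet itself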
	I0408 23:32:54.024606   30464 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0408 23:32:54.034165   30464 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5651 Apr  8 23:04 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5654 Apr  8 23:04 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5711 Apr  8 23:04 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5606 Apr  8 23:04 /etc/kubernetes/scheduler.conf
	
	I0408 23:32:54.034218   30464 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I0408 23:32:54.042673   30464 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I0408 23:32:54.051188   30464 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I0408 23:32:54.059688   30464 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0408 23:32:54.059730   30464 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0408 23:32:54.068295   30464 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I0408 23:32:54.076556   30464 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0408 23:32:54.076600   30464 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0408 23:32:54.085141   30464 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0408 23:32:54.094321   30464 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0408 23:32:54.142756   30464 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0408 23:32:54.914905   30464 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0408 23:32:55.110083   30464 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0408 23:32:55.179354   30464 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
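	Condensed, the restart path re-runs individual kubeadm init phases against the regenerated config rather than doing a full kubeadm init; a sketch of the same sequence (binary path, version directory and config path taken from the log, the addon phase follows later):

	    for phase in "certs all" "kubeconfig all" kubelet-start "control-plane all" "etcd local"; do
	      # run each phase with the minikube-provisioned kubeadm on the node
	      sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" \
	        kubeadm init phase $phase --config /var/tmp/minikube/kubeadm.yaml
	    done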
	I0408 23:32:55.296288   30464 api_server.go:52] waiting for apiserver process to appear ...
	I0408 23:32:55.296343   30464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 23:32:55.796655   30464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 23:32:56.296868   30464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 23:32:56.310439   30464 api_server.go:72] duration metric: took 1.014152959s to wait for apiserver process to appear ...
	I0408 23:32:56.310451   30464 api_server.go:88] waiting for apiserver healthz status ...
	I0408 23:32:56.310471   30464 api_server.go:253] Checking apiserver healthz at https://192.168.39.234:8441/healthz ...
	I0408 23:32:58.190641   30464 api_server.go:279] https://192.168.39.234:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0408 23:32:58.190660   30464 api_server.go:103] status: https://192.168.39.234:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0408 23:32:58.190673   30464 api_server.go:253] Checking apiserver healthz at https://192.168.39.234:8441/healthz ...
	I0408 23:32:58.245929   30464 api_server.go:279] https://192.168.39.234:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0408 23:32:58.245954   30464 api_server.go:103] status: https://192.168.39.234:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0408 23:32:58.311211   30464 api_server.go:253] Checking apiserver healthz at https://192.168.39.234:8441/healthz ...
	I0408 23:32:58.316726   30464 api_server.go:279] https://192.168.39.234:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0408 23:32:58.316741   30464 api_server.go:103] status: https://192.168.39.234:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0408 23:32:58.811442   30464 api_server.go:253] Checking apiserver healthz at https://192.168.39.234:8441/healthz ...
	I0408 23:32:58.815395   30464 api_server.go:279] https://192.168.39.234:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0408 23:32:58.815406   30464 api_server.go:103] status: https://192.168.39.234:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0408 23:32:59.311016   30464 api_server.go:253] Checking apiserver healthz at https://192.168.39.234:8441/healthz ...
	I0408 23:32:59.317440   30464 api_server.go:279] https://192.168.39.234:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0408 23:32:59.317452   30464 api_server.go:103] status: https://192.168.39.234:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0408 23:32:59.811262   30464 api_server.go:253] Checking apiserver healthz at https://192.168.39.234:8441/healthz ...
	I0408 23:32:59.815937   30464 api_server.go:279] https://192.168.39.234:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0408 23:32:59.815951   30464 api_server.go:103] status: https://192.168.39.234:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0408 23:33:00.311295   30464 api_server.go:253] Checking apiserver healthz at https://192.168.39.234:8441/healthz ...
	I0408 23:33:00.317131   30464 api_server.go:279] https://192.168.39.234:8441/healthz returned 200:
	ok
	I0408 23:33:00.323082   30464 api_server.go:141] control plane version: v1.32.2
	I0408 23:33:00.323095   30464 api_server.go:131] duration metric: took 4.012639818s to wait for apiserver health ...
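	The 403 responses above are expected right after the restart: anonymous access to /healthz relies on the bootstrap RBAC binding for system:public-info-viewer, which does not exist yet, and the endpoint keeps returning 500 until the poststarthook/rbac/bootstrap-roles check passes. A sketch of probing the same endpoint directly from the node, using the admin kubeconfig written by kubeadm:

	    curl -k https://192.168.39.234:8441/healthz                                       # 403 for anonymous requests
	    sudo kubectl --kubeconfig /etc/kubernetes/admin.conf get --raw '/healthz?verbose' # per-check breakdown as in the log
	    sudo kubectl --kubeconfig /etc/kubernetes/admin.conf get --raw /healthz           # plain "ok" once healthy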
	I0408 23:33:00.323101   30464 cni.go:84] Creating CNI manager for ""
	I0408 23:33:00.323106   30464 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0408 23:33:00.324956   30464 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0408 23:33:00.326108   30464 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0408 23:33:00.336207   30464 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0408 23:33:00.356482   30464 system_pods.go:43] waiting for kube-system pods to appear ...
	I0408 23:33:00.359190   30464 system_pods.go:59] 0 kube-system pods found
	I0408 23:33:00.359226   30464 retry.go:31] will retry after 231.836803ms: only 0 pod(s) have shown up
	I0408 23:33:00.594234   30464 system_pods.go:59] 0 kube-system pods found
	I0408 23:33:00.594249   30464 retry.go:31] will retry after 250.128315ms: only 0 pod(s) have shown up
	I0408 23:33:00.846947   30464 system_pods.go:59] 0 kube-system pods found
	I0408 23:33:00.846962   30464 retry.go:31] will retry after 458.880719ms: only 0 pod(s) have shown up
	I0408 23:33:01.310099   30464 system_pods.go:59] 1 kube-system pods found
	I0408 23:33:01.310121   30464 system_pods.go:61] "kube-controller-manager-functional-546336" [d33e0b5c-7aa0-4a55-82d5-81e0737d4c6b] Pending
	I0408 23:33:01.310134   30464 retry.go:31] will retry after 539.495059ms: only 1 pod(s) have shown up
	I0408 23:33:01.852588   30464 system_pods.go:59] 1 kube-system pods found
	I0408 23:33:01.852601   30464 system_pods.go:61] "kube-controller-manager-functional-546336" [d33e0b5c-7aa0-4a55-82d5-81e0737d4c6b] Pending
	I0408 23:33:01.852612   30464 retry.go:31] will retry after 742.38645ms: only 1 pod(s) have shown up
	I0408 23:33:02.597848   30464 system_pods.go:59] 1 kube-system pods found
	I0408 23:33:02.597863   30464 system_pods.go:61] "kube-controller-manager-functional-546336" [d33e0b5c-7aa0-4a55-82d5-81e0737d4c6b] Pending
	I0408 23:33:02.597876   30464 retry.go:31] will retry after 904.230225ms: only 1 pod(s) have shown up
	I0408 23:33:03.506026   30464 system_pods.go:59] 2 kube-system pods found
	I0408 23:33:03.506038   30464 system_pods.go:61] "etcd-functional-546336" [a605c500-8324-49b4-a77b-035b734de32b] Pending
	I0408 23:33:03.506042   30464 system_pods.go:61] "kube-controller-manager-functional-546336" [d33e0b5c-7aa0-4a55-82d5-81e0737d4c6b] Pending
	I0408 23:33:03.506047   30464 system_pods.go:74] duration metric: took 3.149555407s to wait for pod list to return data ...
	I0408 23:33:03.506059   30464 node_conditions.go:102] verifying NodePressure condition ...
	I0408 23:33:03.508520   30464 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0408 23:33:03.508536   30464 node_conditions.go:123] node cpu capacity is 2
	I0408 23:33:03.508551   30464 node_conditions.go:105] duration metric: took 2.488935ms to run NodePressure ...
	I0408 23:33:03.508564   30464 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0408 23:33:03.670714   30464 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0408 23:33:03.673706   30464 kubeadm.go:739] kubelet initialised
	I0408 23:33:03.673715   30464 kubeadm.go:740] duration metric: took 2.988973ms waiting for restarted kubelet to initialise ...
	I0408 23:33:03.673724   30464 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0408 23:33:03.675604   30464 pod_ready.go:79] waiting up to 4m0s for pod "etcd-functional-546336" in "kube-system" namespace to be "Ready" ...
	I0408 23:33:05.680755   30464 pod_ready.go:103] pod "etcd-functional-546336" in "kube-system" namespace has status "Ready":"False"
	I0408 23:33:08.180359   30464 pod_ready.go:103] pod "etcd-functional-546336" in "kube-system" namespace has status "Ready":"False"
	I0408 23:33:10.181388   30464 pod_ready.go:103] pod "etcd-functional-546336" in "kube-system" namespace has status "Ready":"False"
	I0408 23:33:12.681317   30464 pod_ready.go:103] pod "etcd-functional-546336" in "kube-system" namespace has status "Ready":"False"
	I0408 23:33:13.179730   30464 pod_ready.go:93] pod "etcd-functional-546336" in "kube-system" namespace has status "Ready":"True"
	I0408 23:33:13.179741   30464 pod_ready.go:82] duration metric: took 9.504127925s for pod "etcd-functional-546336" in "kube-system" namespace to be "Ready" ...
	I0408 23:33:13.179748   30464 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-functional-546336" in "kube-system" namespace to be "Ready" ...
	I0408 23:33:13.684774   30464 pod_ready.go:93] pod "kube-apiserver-functional-546336" in "kube-system" namespace has status "Ready":"True"
	I0408 23:33:13.684786   30464 pod_ready.go:82] duration metric: took 505.032772ms for pod "kube-apiserver-functional-546336" in "kube-system" namespace to be "Ready" ...
	I0408 23:33:13.684801   30464 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-functional-546336" in "kube-system" namespace to be "Ready" ...
	I0408 23:33:15.689767   30464 pod_ready.go:103] pod "kube-controller-manager-functional-546336" in "kube-system" namespace has status "Ready":"False"
	I0408 23:33:17.690510   30464 pod_ready.go:103] pod "kube-controller-manager-functional-546336" in "kube-system" namespace has status "Ready":"False"
	I0408 23:33:19.690685   30464 pod_ready.go:103] pod "kube-controller-manager-functional-546336" in "kube-system" namespace has status "Ready":"False"
	I0408 23:33:22.190234   30464 pod_ready.go:103] pod "kube-controller-manager-functional-546336" in "kube-system" namespace has status "Ready":"False"
	I0408 23:33:24.690469   30464 pod_ready.go:103] pod "kube-controller-manager-functional-546336" in "kube-system" namespace has status "Ready":"False"
	I0408 23:33:27.190240   30464 pod_ready.go:103] pod "kube-controller-manager-functional-546336" in "kube-system" namespace has status "Ready":"False"
	I0408 23:33:29.690091   30464 pod_ready.go:103] pod "kube-controller-manager-functional-546336" in "kube-system" namespace has status "Ready":"False"
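	The remaining wait is the standard readiness loop over the restarted control-plane pods; roughly the same check can be made from the host with kubectl (context name from the log, the timeout mirrors the 4m0s waiter above):

	    kubectl --context functional-546336 -n kube-system wait pod \
	      -l component=kube-controller-manager --for=condition=Ready --timeout=4m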
	
	
	==> CRI-O <==
	Apr 08 23:33:34 functional-546336 crio[12378]: time="2025-04-08 23:33:34.096943321Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744155214096917826,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:157169,},InodesUsed:&UInt64Value{Value:77,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b3373e80-5060-40e2-ade4-d94e1e1ec5b5 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 08 23:33:34 functional-546336 crio[12378]: time="2025-04-08 23:33:34.097429037Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0209213f-595b-494f-9518-dbe9da3ced35 name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 23:33:34 functional-546336 crio[12378]: time="2025-04-08 23:33:34.097494527Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0209213f-595b-494f-9518-dbe9da3ced35 name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 23:33:34 functional-546336 crio[12378]: time="2025-04-08 23:33:34.097683223Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:39ba6cb48251fd240ea97fe44485b68ac9be0e2d6efe73fb56c64f3e86cec4f5,PodSandboxId:446e399173e1b79c2f2fff37a98513a95ee0e366a696ea48b126c167125e0da6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_RUNNING,CreatedAt:1744155175723927935,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-546336,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee4fa84ba00c21f54c9966a4adb4684c,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4cb3b5dc26bb31dd5c7642a3b929bc48601a83dc0567ed0f3ba1ab49d63cb58,PodSandboxId:36f0529cdb69bcf08100444d18f9a51f6157e6c1751cac37a9cdf6a3019673a3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_RUNNING,CreatedAt:1744155175613174762,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-546336,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 472e19ec165b8a6bb82a16ee7ed00fbe,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.termin
ationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3756e4d240c39a3d1577292cd55608f513cf0d8f6ddf4869723eb10be9de2d0,PodSandboxId:a023d99b22bb6d0843d0597572613366f1e51e1101643ccac8c3cfd678789e25,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1744155175593504614,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-546336,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e567e58c6d72117dd010767530cff034,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.k
ubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:025133e197ca76ba8b17954c1ecfe0a00d4e89c5367e733ef5177810b2218d0b,PodSandboxId:2c237f9fc4d79261cf633d4a8002700911ed5a34d64c9cbc1c8ab4285656bbe8,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_EXITED,CreatedAt:1744155081586332392,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-546336,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e567e58c6d72117dd010767530cff034,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy
: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d548bdb16c0b53538b9fedb82ea9a70267ea61b01cdefe0a01ad1ce451cbe9e,PodSandboxId:f99435be9f410361a6e36c6d3853703b756947d8592e641a3ec5f906f8d1b902,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_EXITED,CreatedAt:1744155081559940865,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-546336,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 472e19ec165b8a6bb82a16ee7ed00fbe,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kube
rnetes.pod.terminationGracePeriod: 30,},},&Container{Id:1661ef4bbc8d9bd87ace82439a6240cc02dae018dd90cc00e736742a658aafc6,PodSandboxId:9f843bd6803dbde0efa8dc6a99a4935999937bf3df878830b5bb410ddd23c0c4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:20,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_EXITED,CreatedAt:1744154960696774663,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-546336,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58759ca9d0ee331612ac601ca4858681,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 20,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolic
y: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a49d33808640c366a7d5443834c9e5e5f2248bf901344434fbe3ea7884636c8a,PodSandboxId:398b8f93901f01b1fc6e27e8f2609c296a47c0cce3f992a731f500de88f30df1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:20,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_EXITED,CreatedAt:1744154947696174960,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-546336,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b6b4c8ccb3c7c6b209837aca5454ea1,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kubernetes.container.restartCount: 20,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0209213f-595b-494f-9518-dbe9da3ced35 name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 23:33:34 functional-546336 crio[12378]: time="2025-04-08 23:33:34.127408954Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b43fce25-f68b-400c-9a07-5d425f043299 name=/runtime.v1.RuntimeService/Version
	Apr 08 23:33:34 functional-546336 crio[12378]: time="2025-04-08 23:33:34.127493236Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b43fce25-f68b-400c-9a07-5d425f043299 name=/runtime.v1.RuntimeService/Version
	Apr 08 23:33:34 functional-546336 crio[12378]: time="2025-04-08 23:33:34.128327691Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a1cc9e18-75b8-46b9-9195-37c8dfa93fe8 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 08 23:33:34 functional-546336 crio[12378]: time="2025-04-08 23:33:34.128808895Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744155214128787394,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:157169,},InodesUsed:&UInt64Value{Value:77,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a1cc9e18-75b8-46b9-9195-37c8dfa93fe8 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 08 23:33:34 functional-546336 crio[12378]: time="2025-04-08 23:33:34.129249008Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f788c89b-d676-46ec-8a71-f13c594985ff name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 23:33:34 functional-546336 crio[12378]: time="2025-04-08 23:33:34.129307328Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f788c89b-d676-46ec-8a71-f13c594985ff name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 23:33:34 functional-546336 crio[12378]: time="2025-04-08 23:33:34.129463592Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:39ba6cb48251fd240ea97fe44485b68ac9be0e2d6efe73fb56c64f3e86cec4f5,PodSandboxId:446e399173e1b79c2f2fff37a98513a95ee0e366a696ea48b126c167125e0da6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_RUNNING,CreatedAt:1744155175723927935,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-546336,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee4fa84ba00c21f54c9966a4adb4684c,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4cb3b5dc26bb31dd5c7642a3b929bc48601a83dc0567ed0f3ba1ab49d63cb58,PodSandboxId:36f0529cdb69bcf08100444d18f9a51f6157e6c1751cac37a9cdf6a3019673a3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_RUNNING,CreatedAt:1744155175613174762,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-546336,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 472e19ec165b8a6bb82a16ee7ed00fbe,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.termin
ationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3756e4d240c39a3d1577292cd55608f513cf0d8f6ddf4869723eb10be9de2d0,PodSandboxId:a023d99b22bb6d0843d0597572613366f1e51e1101643ccac8c3cfd678789e25,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1744155175593504614,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-546336,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e567e58c6d72117dd010767530cff034,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.k
ubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:025133e197ca76ba8b17954c1ecfe0a00d4e89c5367e733ef5177810b2218d0b,PodSandboxId:2c237f9fc4d79261cf633d4a8002700911ed5a34d64c9cbc1c8ab4285656bbe8,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_EXITED,CreatedAt:1744155081586332392,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-546336,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e567e58c6d72117dd010767530cff034,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy
: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d548bdb16c0b53538b9fedb82ea9a70267ea61b01cdefe0a01ad1ce451cbe9e,PodSandboxId:f99435be9f410361a6e36c6d3853703b756947d8592e641a3ec5f906f8d1b902,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_EXITED,CreatedAt:1744155081559940865,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-546336,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 472e19ec165b8a6bb82a16ee7ed00fbe,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kube
rnetes.pod.terminationGracePeriod: 30,},},&Container{Id:1661ef4bbc8d9bd87ace82439a6240cc02dae018dd90cc00e736742a658aafc6,PodSandboxId:9f843bd6803dbde0efa8dc6a99a4935999937bf3df878830b5bb410ddd23c0c4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:20,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_EXITED,CreatedAt:1744154960696774663,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-546336,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58759ca9d0ee331612ac601ca4858681,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 20,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolic
y: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a49d33808640c366a7d5443834c9e5e5f2248bf901344434fbe3ea7884636c8a,PodSandboxId:398b8f93901f01b1fc6e27e8f2609c296a47c0cce3f992a731f500de88f30df1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:20,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_EXITED,CreatedAt:1744154947696174960,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-546336,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b6b4c8ccb3c7c6b209837aca5454ea1,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kubernetes.container.restartCount: 20,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f788c89b-d676-46ec-8a71-f13c594985ff name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 23:33:34 functional-546336 crio[12378]: time="2025-04-08 23:33:34.163339913Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=83fa3baf-c4f0-4825-9c17-fdb6129fac63 name=/runtime.v1.RuntimeService/Version
	Apr 08 23:33:34 functional-546336 crio[12378]: time="2025-04-08 23:33:34.163438351Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=83fa3baf-c4f0-4825-9c17-fdb6129fac63 name=/runtime.v1.RuntimeService/Version
	Apr 08 23:33:34 functional-546336 crio[12378]: time="2025-04-08 23:33:34.168725528Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d85c5547-4041-4da6-8a55-11df35eb6635 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 08 23:33:34 functional-546336 crio[12378]: time="2025-04-08 23:33:34.169163472Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744155214169144788,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:157169,},InodesUsed:&UInt64Value{Value:77,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d85c5547-4041-4da6-8a55-11df35eb6635 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 08 23:33:34 functional-546336 crio[12378]: time="2025-04-08 23:33:34.169763965Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=91a3ea29-8387-4684-9ffa-88c0d227c4be name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 23:33:34 functional-546336 crio[12378]: time="2025-04-08 23:33:34.169826998Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=91a3ea29-8387-4684-9ffa-88c0d227c4be name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 23:33:34 functional-546336 crio[12378]: time="2025-04-08 23:33:34.169969596Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:39ba6cb48251fd240ea97fe44485b68ac9be0e2d6efe73fb56c64f3e86cec4f5,PodSandboxId:446e399173e1b79c2f2fff37a98513a95ee0e366a696ea48b126c167125e0da6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_RUNNING,CreatedAt:1744155175723927935,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-546336,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee4fa84ba00c21f54c9966a4adb4684c,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4cb3b5dc26bb31dd5c7642a3b929bc48601a83dc0567ed0f3ba1ab49d63cb58,PodSandboxId:36f0529cdb69bcf08100444d18f9a51f6157e6c1751cac37a9cdf6a3019673a3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_RUNNING,CreatedAt:1744155175613174762,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-546336,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 472e19ec165b8a6bb82a16ee7ed00fbe,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.termin
ationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3756e4d240c39a3d1577292cd55608f513cf0d8f6ddf4869723eb10be9de2d0,PodSandboxId:a023d99b22bb6d0843d0597572613366f1e51e1101643ccac8c3cfd678789e25,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1744155175593504614,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-546336,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e567e58c6d72117dd010767530cff034,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.k
ubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:025133e197ca76ba8b17954c1ecfe0a00d4e89c5367e733ef5177810b2218d0b,PodSandboxId:2c237f9fc4d79261cf633d4a8002700911ed5a34d64c9cbc1c8ab4285656bbe8,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_EXITED,CreatedAt:1744155081586332392,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-546336,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e567e58c6d72117dd010767530cff034,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy
: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d548bdb16c0b53538b9fedb82ea9a70267ea61b01cdefe0a01ad1ce451cbe9e,PodSandboxId:f99435be9f410361a6e36c6d3853703b756947d8592e641a3ec5f906f8d1b902,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_EXITED,CreatedAt:1744155081559940865,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-546336,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 472e19ec165b8a6bb82a16ee7ed00fbe,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kube
rnetes.pod.terminationGracePeriod: 30,},},&Container{Id:1661ef4bbc8d9bd87ace82439a6240cc02dae018dd90cc00e736742a658aafc6,PodSandboxId:9f843bd6803dbde0efa8dc6a99a4935999937bf3df878830b5bb410ddd23c0c4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:20,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_EXITED,CreatedAt:1744154960696774663,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-546336,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58759ca9d0ee331612ac601ca4858681,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 20,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolic
y: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a49d33808640c366a7d5443834c9e5e5f2248bf901344434fbe3ea7884636c8a,PodSandboxId:398b8f93901f01b1fc6e27e8f2609c296a47c0cce3f992a731f500de88f30df1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:20,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_EXITED,CreatedAt:1744154947696174960,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-546336,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b6b4c8ccb3c7c6b209837aca5454ea1,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kubernetes.container.restartCount: 20,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=91a3ea29-8387-4684-9ffa-88c0d227c4be name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 23:33:34 functional-546336 crio[12378]: time="2025-04-08 23:33:34.199522064Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a5c566f7-d3fe-4db8-88e6-ef8802b91cb8 name=/runtime.v1.RuntimeService/Version
	Apr 08 23:33:34 functional-546336 crio[12378]: time="2025-04-08 23:33:34.199627979Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a5c566f7-d3fe-4db8-88e6-ef8802b91cb8 name=/runtime.v1.RuntimeService/Version
	Apr 08 23:33:34 functional-546336 crio[12378]: time="2025-04-08 23:33:34.200745386Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=578cd1fe-81c1-4007-a4a7-b5cdb9e9138a name=/runtime.v1.ImageService/ImageFsInfo
	Apr 08 23:33:34 functional-546336 crio[12378]: time="2025-04-08 23:33:34.201207056Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744155214201184581,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:157169,},InodesUsed:&UInt64Value{Value:77,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=578cd1fe-81c1-4007-a4a7-b5cdb9e9138a name=/runtime.v1.ImageService/ImageFsInfo
	Apr 08 23:33:34 functional-546336 crio[12378]: time="2025-04-08 23:33:34.201651344Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ccfc5405-8aaa-47c1-8ddf-5e64962719e2 name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 23:33:34 functional-546336 crio[12378]: time="2025-04-08 23:33:34.201718767Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ccfc5405-8aaa-47c1-8ddf-5e64962719e2 name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 23:33:34 functional-546336 crio[12378]: time="2025-04-08 23:33:34.201874583Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:39ba6cb48251fd240ea97fe44485b68ac9be0e2d6efe73fb56c64f3e86cec4f5,PodSandboxId:446e399173e1b79c2f2fff37a98513a95ee0e366a696ea48b126c167125e0da6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_RUNNING,CreatedAt:1744155175723927935,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-546336,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee4fa84ba00c21f54c9966a4adb4684c,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4cb3b5dc26bb31dd5c7642a3b929bc48601a83dc0567ed0f3ba1ab49d63cb58,PodSandboxId:36f0529cdb69bcf08100444d18f9a51f6157e6c1751cac37a9cdf6a3019673a3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_RUNNING,CreatedAt:1744155175613174762,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-546336,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 472e19ec165b8a6bb82a16ee7ed00fbe,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.termin
ationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3756e4d240c39a3d1577292cd55608f513cf0d8f6ddf4869723eb10be9de2d0,PodSandboxId:a023d99b22bb6d0843d0597572613366f1e51e1101643ccac8c3cfd678789e25,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1744155175593504614,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-546336,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e567e58c6d72117dd010767530cff034,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.k
ubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:025133e197ca76ba8b17954c1ecfe0a00d4e89c5367e733ef5177810b2218d0b,PodSandboxId:2c237f9fc4d79261cf633d4a8002700911ed5a34d64c9cbc1c8ab4285656bbe8,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_EXITED,CreatedAt:1744155081586332392,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-546336,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e567e58c6d72117dd010767530cff034,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy
: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d548bdb16c0b53538b9fedb82ea9a70267ea61b01cdefe0a01ad1ce451cbe9e,PodSandboxId:f99435be9f410361a6e36c6d3853703b756947d8592e641a3ec5f906f8d1b902,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_EXITED,CreatedAt:1744155081559940865,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-546336,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 472e19ec165b8a6bb82a16ee7ed00fbe,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kube
rnetes.pod.terminationGracePeriod: 30,},},&Container{Id:1661ef4bbc8d9bd87ace82439a6240cc02dae018dd90cc00e736742a658aafc6,PodSandboxId:9f843bd6803dbde0efa8dc6a99a4935999937bf3df878830b5bb410ddd23c0c4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:20,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_EXITED,CreatedAt:1744154960696774663,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-546336,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58759ca9d0ee331612ac601ca4858681,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 20,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolic
y: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a49d33808640c366a7d5443834c9e5e5f2248bf901344434fbe3ea7884636c8a,PodSandboxId:398b8f93901f01b1fc6e27e8f2609c296a47c0cce3f992a731f500de88f30df1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:20,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_EXITED,CreatedAt:1744154947696174960,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-546336,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b6b4c8ccb3c7c6b209837aca5454ea1,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kubernetes.container.restartCount: 20,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ccfc5405-8aaa-47c1-8ddf-5e64962719e2 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	39ba6cb48251f       85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef   38 seconds ago      Running             kube-apiserver            0                   446e399173e1b       kube-apiserver-functional-546336
	a4cb3b5dc26bb       d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d   38 seconds ago      Running             kube-scheduler            2                   36f0529cdb69b       kube-scheduler-functional-546336
	c3756e4d240c3       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc   38 seconds ago      Running             etcd                      2                   a023d99b22bb6       etcd-functional-546336
	025133e197ca7       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc   2 minutes ago       Exited              etcd                      1                   2c237f9fc4d79       etcd-functional-546336
	1d548bdb16c0b       d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d   2 minutes ago       Exited              kube-scheduler            1                   f99435be9f410       kube-scheduler-functional-546336
	1661ef4bbc8d9       b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389   4 minutes ago       Exited              kube-controller-manager   20                  9f843bd6803db       kube-controller-manager-functional-546336
	a49d33808640c       85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef   4 minutes ago       Exited              kube-apiserver            20                  398b8f93901f0       kube-apiserver-functional-546336
	
	
	==> describe nodes <==
	Name:               functional-546336
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-546336
	                    kubernetes.io/os=linux
	Annotations:        volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 08 Apr 2025 23:32:58 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-546336
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 08 Apr 2025 23:33:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 08 Apr 2025 23:32:58 +0000   Tue, 08 Apr 2025 23:32:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 08 Apr 2025 23:32:58 +0000   Tue, 08 Apr 2025 23:32:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 08 Apr 2025 23:32:58 +0000   Tue, 08 Apr 2025 23:32:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 08 Apr 2025 23:32:58 +0000   Tue, 08 Apr 2025 23:32:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.234
	  Hostname:    functional-546336
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912788Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912788Ki
	  pods:               110
	System Info:
	  Machine ID:                 6ec0abc9da9248ac8d5ec896a528e22f
	  System UUID:                6ec0abc9-da92-48ac-8d5e-c896a528e22f
	  Boot ID:                    97c2e84a-8130-483e-a998-0421d1a75ebe
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.32.2
	  Kube-Proxy Version:         v1.32.2
	Non-terminated Pods:          (4 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-functional-546336                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         32s
	  kube-system                 kube-apiserver-functional-546336             250m (12%)    0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-controller-manager-functional-546336    200m (10%)    0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 kube-scheduler-functional-546336             100m (5%)     0 (0%)      0 (0%)           0 (0%)         29s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (32%)  0 (0%)
	  memory             100Mi (2%)  0 (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From     Message
	  ----    ------                   ----               ----     -------
	  Normal  Starting                 39s                kubelet  Starting kubelet.
	  Normal  NodeHasSufficientMemory  39s (x8 over 39s)  kubelet  Node functional-546336 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    39s (x8 over 39s)  kubelet  Node functional-546336 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     39s (x7 over 39s)  kubelet  Node functional-546336 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  39s                kubelet  Updated Node Allocatable limit across pods
	
	
	==> dmesg <==
	[  +5.204756] systemd-fstab-generator[1363]: Ignoring "noauto" option for root device
	[  +0.118972] kauditd_printk_skb: 21 callbacks suppressed
	[ +32.048879] kauditd_printk_skb: 79 callbacks suppressed
	[Apr 8 22:55] systemd-fstab-generator[2630]: Ignoring "noauto" option for root device
	[  +0.233527] systemd-fstab-generator[2717]: Ignoring "noauto" option for root device
	[  +0.248914] systemd-fstab-generator[2739]: Ignoring "noauto" option for root device
	[  +0.207219] systemd-fstab-generator[2794]: Ignoring "noauto" option for root device
	[  +0.302870] systemd-fstab-generator[2827]: Ignoring "noauto" option for root device
	[Apr 8 22:56] systemd-fstab-generator[3065]: Ignoring "noauto" option for root device
	[  +0.075309] kauditd_printk_skb: 182 callbacks suppressed
	[  +2.132222] systemd-fstab-generator[3186]: Ignoring "noauto" option for root device
	[ +22.508995] kauditd_printk_skb: 70 callbacks suppressed
	[Apr 8 23:00] systemd-fstab-generator[8560]: Ignoring "noauto" option for root device
	[Apr 8 23:01] kauditd_printk_skb: 66 callbacks suppressed
	[Apr 8 23:04] systemd-fstab-generator[9464]: Ignoring "noauto" option for root device
	[Apr 8 23:05] kauditd_printk_skb: 48 callbacks suppressed
	[Apr 8 23:31] systemd-fstab-generator[12204]: Ignoring "noauto" option for root device
	[  +0.181866] systemd-fstab-generator[12259]: Ignoring "noauto" option for root device
	[  +0.161310] systemd-fstab-generator[12275]: Ignoring "noauto" option for root device
	[  +0.151156] systemd-fstab-generator[12288]: Ignoring "noauto" option for root device
	[  +0.262450] systemd-fstab-generator[12316]: Ignoring "noauto" option for root device
	[Apr 8 23:32] systemd-fstab-generator[12461]: Ignoring "noauto" option for root device
	[  +0.066961] kauditd_printk_skb: 132 callbacks suppressed
	[  +1.727313] systemd-fstab-generator[12650]: Ignoring "noauto" option for root device
	[  +4.259320] kauditd_printk_skb: 66 callbacks suppressed
	
	
	==> etcd [025133e197ca76ba8b17954c1ecfe0a00d4e89c5367e733ef5177810b2218d0b] <==
	{"level":"info","ts":"2025-04-08T23:31:21.764950Z","caller":"embed/etcd.go:136","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.234:2379"]}
	{"level":"info","ts":"2025-04-08T23:31:21.765182Z","caller":"embed/etcd.go:311","msg":"starting an etcd server","etcd-version":"3.5.16","git-sha":"f20bbad","go-version":"go1.22.7","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":false,"name":"functional-546336","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://192.168.39.234:2380"],"listen-peer-urls":["https://192.168.39.234:2380"],"advertise-client-urls":["https://192.168.39.234:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.234:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"functional-546336=https://192.168.39.234
:2380","initial-cluster-state":"new","initial-cluster-token":"etcd-cluster","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
	{"level":"info","ts":"2025-04-08T23:31:21.769747Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"3.975051ms"}
	{"level":"info","ts":"2025-04-08T23:31:21.776927Z","caller":"etcdserver/raft.go:505","msg":"starting local member","local-member-id":"de9917ec5c740094","cluster-id":"6193f7f4ee516b71"}
	{"level":"info","ts":"2025-04-08T23:31:21.777038Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"de9917ec5c740094 switched to configuration voters=()"}
	{"level":"info","ts":"2025-04-08T23:31:21.777697Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"de9917ec5c740094 became follower at term 0"}
	{"level":"info","ts":"2025-04-08T23:31:21.789289Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft de9917ec5c740094 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]"}
	{"level":"info","ts":"2025-04-08T23:31:21.789356Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"de9917ec5c740094 became follower at term 1"}
	{"level":"info","ts":"2025-04-08T23:31:21.789491Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"de9917ec5c740094 switched to configuration voters=(16039877851787559060)"}
	{"level":"warn","ts":"2025-04-08T23:31:21.797537Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	{"level":"info","ts":"2025-04-08T23:31:21.799266Z","caller":"mvcc/kvstore.go:423","msg":"kvstore restored","current-rev":1}
	{"level":"info","ts":"2025-04-08T23:31:21.802859Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	{"level":"info","ts":"2025-04-08T23:31:21.808268Z","caller":"etcdserver/server.go:873","msg":"starting etcd server","local-member-id":"de9917ec5c740094","local-server-version":"3.5.16","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2025-04-08T23:31:21.811041Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-04-08T23:31:21.816214Z","caller":"embed/etcd.go:729","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-04-08T23:31:21.828888Z","caller":"embed/etcd.go:600","msg":"serving peer traffic","address":"192.168.39.234:2380"}
	{"level":"info","ts":"2025-04-08T23:31:21.828914Z","caller":"embed/etcd.go:572","msg":"cmux::serve","address":"192.168.39.234:2380"}
	{"level":"info","ts":"2025-04-08T23:31:21.829219Z","caller":"embed/etcd.go:280","msg":"now serving peer/client/metrics","local-member-id":"de9917ec5c740094","initial-advertise-peer-urls":["https://192.168.39.234:2380"],"listen-peer-urls":["https://192.168.39.234:2380"],"advertise-client-urls":["https://192.168.39.234:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.234:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-04-08T23:31:21.829243Z","caller":"embed/etcd.go:871","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-04-08T23:31:21.829848Z","caller":"etcdserver/server.go:757","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"de9917ec5c740094","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2025-04-08T23:31:21.829989Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-04-08T23:31:21.830030Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-04-08T23:31:21.830050Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-04-08T23:31:21.837003Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"de9917ec5c740094 switched to configuration voters=(16039877851787559060)"}
	{"level":"info","ts":"2025-04-08T23:31:21.837166Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6193f7f4ee516b71","local-member-id":"de9917ec5c740094","added-peer-id":"de9917ec5c740094","added-peer-peer-urls":["https://192.168.39.234:2380"]}
	
	
	==> etcd [c3756e4d240c39a3d1577292cd55608f513cf0d8f6ddf4869723eb10be9de2d0] <==
	{"level":"info","ts":"2025-04-08T23:32:55.893071Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-04-08T23:32:55.893219Z","caller":"embed/etcd.go:600","msg":"serving peer traffic","address":"192.168.39.234:2380"}
	{"level":"info","ts":"2025-04-08T23:32:55.893249Z","caller":"embed/etcd.go:572","msg":"cmux::serve","address":"192.168.39.234:2380"}
	{"level":"info","ts":"2025-04-08T23:32:55.893839Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"de9917ec5c740094 switched to configuration voters=(16039877851787559060)"}
	{"level":"info","ts":"2025-04-08T23:32:55.893900Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6193f7f4ee516b71","local-member-id":"de9917ec5c740094","added-peer-id":"de9917ec5c740094","added-peer-peer-urls":["https://192.168.39.234:2380"]}
	{"level":"info","ts":"2025-04-08T23:32:57.076803Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"de9917ec5c740094 is starting a new election at term 1"}
	{"level":"info","ts":"2025-04-08T23:32:57.076938Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"de9917ec5c740094 became pre-candidate at term 1"}
	{"level":"info","ts":"2025-04-08T23:32:57.076973Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"de9917ec5c740094 received MsgPreVoteResp from de9917ec5c740094 at term 1"}
	{"level":"info","ts":"2025-04-08T23:32:57.077002Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"de9917ec5c740094 became candidate at term 2"}
	{"level":"info","ts":"2025-04-08T23:32:57.077019Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"de9917ec5c740094 received MsgVoteResp from de9917ec5c740094 at term 2"}
	{"level":"info","ts":"2025-04-08T23:32:57.077039Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"de9917ec5c740094 became leader at term 2"}
	{"level":"info","ts":"2025-04-08T23:32:57.077058Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: de9917ec5c740094 elected leader de9917ec5c740094 at term 2"}
	{"level":"info","ts":"2025-04-08T23:32:57.081662Z","caller":"etcdserver/server.go:2651","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-08T23:32:57.081933Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"de9917ec5c740094","local-member-attributes":"{Name:functional-546336 ClientURLs:[https://192.168.39.234:2379]}","request-path":"/0/members/de9917ec5c740094/attributes","cluster-id":"6193f7f4ee516b71","publish-timeout":"7s"}
	{"level":"info","ts":"2025-04-08T23:32:57.082228Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-04-08T23:32:57.082591Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-04-08T23:32:57.082619Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6193f7f4ee516b71","local-member-id":"de9917ec5c740094","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-08T23:32:57.082777Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-08T23:32:57.082811Z","caller":"etcdserver/server.go:2675","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-08T23:32:57.083247Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-04-08T23:32:57.083372Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-04-08T23:32:57.083400Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-04-08T23:32:57.083729Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-04-08T23:32:57.083869Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.234:2379"}
	{"level":"info","ts":"2025-04-08T23:32:57.084292Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 23:33:34 up 39 min,  0 users,  load average: 0.13, 0.06, 0.03
	Linux functional-546336 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [39ba6cb48251fd240ea97fe44485b68ac9be0e2d6efe73fb56c64f3e86cec4f5] <==
	I0408 23:32:58.193690       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0408 23:32:58.196327       1 shared_informer.go:320] Caches are synced for configmaps
	I0408 23:32:58.199184       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0408 23:32:58.199515       1 aggregator.go:171] initial CRD sync complete...
	I0408 23:32:58.199615       1 autoregister_controller.go:144] Starting autoregister controller
	I0408 23:32:58.199643       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0408 23:32:58.199660       1 cache.go:39] Caches are synced for autoregister controller
	I0408 23:32:58.211321       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0408 23:32:58.222785       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0408 23:32:58.222817       1 policy_source.go:240] refreshing policies
	I0408 23:32:58.283040       1 controller.go:615] quota admission added evaluator for: namespaces
	I0408 23:32:58.306611       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0408 23:32:59.098650       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0408 23:32:59.103499       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0408 23:32:59.103526       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0408 23:32:59.724987       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0408 23:32:59.760186       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0408 23:32:59.805731       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0408 23:32:59.811676       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.234]
	I0408 23:32:59.812438       1 controller.go:615] quota admission added evaluator for: endpoints
	I0408 23:32:59.816427       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0408 23:33:03.515203       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0408 23:33:03.521214       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0408 23:33:03.534674       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0408 23:33:03.551989       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	
	
	==> kube-apiserver [a49d33808640c366a7d5443834c9e5e5f2248bf901344434fbe3ea7884636c8a] <==
	I0408 23:29:07.882975       1 server.go:145] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	W0408 23:29:08.385321       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0408 23:29:08.386208       1 logging.go:55] [core] [Channel #3 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I0408 23:29:08.386892       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0408 23:29:08.395803       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0408 23:29:08.402128       1 plugins.go:157] Loaded 13 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionPolicy,MutatingAdmissionWebhook.
	I0408 23:29:08.402176       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0408 23:29:08.402368       1 instance.go:233] Using reconciler: lease
	W0408 23:29:08.403175       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0408 23:29:09.385863       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0408 23:29:09.387525       1 logging.go:55] [core] [Channel #3 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0408 23:29:09.403805       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0408 23:29:10.916773       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0408 23:29:11.127730       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0408 23:29:11.236841       1 logging.go:55] [core] [Channel #3 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0408 23:29:13.644280       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0408 23:29:14.001608       1 logging.go:55] [core] [Channel #3 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0408 23:29:14.080114       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0408 23:29:17.316540       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0408 23:29:17.332217       1 logging.go:55] [core] [Channel #3 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0408 23:29:18.566823       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0408 23:29:23.341139       1 logging.go:55] [core] [Channel #3 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0408 23:29:24.712048       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0408 23:29:25.880622       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	F0408 23:29:28.403920       1 instance.go:226] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [1661ef4bbc8d9bd87ace82439a6240cc02dae018dd90cc00e736742a658aafc6] <==
	I0408 23:29:21.422680       1 serving.go:386] Generated self-signed cert in-memory
	I0408 23:29:21.897199       1 controllermanager.go:185] "Starting" version="v1.32.2"
	I0408 23:29:21.897292       1 controllermanager.go:187] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0408 23:29:21.899219       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0408 23:29:21.899402       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0408 23:29:21.899429       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0408 23:29:21.899733       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0408 23:29:39.413030       1 controllermanager.go:230] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.234:8441/healthz\": dial tcp 192.168.39.234:8441: connect: connection refused"
	
	
	==> kube-scheduler [1d548bdb16c0b53538b9fedb82ea9a70267ea61b01cdefe0a01ad1ce451cbe9e] <==
	I0408 23:31:22.443063       1 serving.go:386] Generated self-signed cert in-memory
	W0408 23:31:22.902312       1 authentication.go:397] Error looking up in-cluster authentication configuration: Get "https://192.168.39.234:8441/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp 192.168.39.234:8441: connect: connection refused
	W0408 23:31:22.902396       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0408 23:31:22.902421       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0408 23:31:22.907581       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.2"
	I0408 23:31:22.907650       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	E0408 23:31:22.907691       1 event.go:401] "Unable start event watcher (will not retry!)" err="broadcaster already stopped"
	I0408 23:31:22.909477       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0408 23:31:22.909532       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0408 23:31:22.909591       1 shared_informer.go:316] "Unhandled Error" err="unable to sync caches for client-ca::kube-system::extension-apiserver-authentication::client-ca-file" logger="UnhandledError"
	I0408 23:31:22.909834       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0408 23:31:22.909914       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0408 23:31:22.910073       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0408 23:31:22.910172       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	E0408 23:31:22.910521       1 server.go:266] "waiting for handlers to sync" err="context canceled"
	I0408 23:31:22.910624       1 configmap_cafile_content.go:213] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0408 23:31:22.910701       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [a4cb3b5dc26bb31dd5c7642a3b929bc48601a83dc0567ed0f3ba1ab49d63cb58] <==
	W0408 23:32:58.227805       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0408 23:32:58.227831       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0408 23:32:58.227905       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0408 23:32:58.227929       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0408 23:32:59.047784       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0408 23:32:59.047833       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0408 23:32:59.117723       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0408 23:32:59.117843       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0408 23:32:59.130407       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0408 23:32:59.130873       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0408 23:32:59.157517       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0408 23:32:59.157648       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0408 23:32:59.203841       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0408 23:32:59.203873       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0408 23:32:59.215630       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0408 23:32:59.215714       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0408 23:32:59.303332       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0408 23:32:59.303442       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0408 23:32:59.321481       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope
	E0408 23:32:59.321523       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.VolumeAttachment: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0408 23:32:59.409015       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0408 23:32:59.409110       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0408 23:32:59.474734       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0408 23:32:59.474778       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0408 23:33:01.711501       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 08 23:33:01 functional-546336 kubelet[12657]: E0408 23:33:01.297712   12657 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-controller-manager-functional-546336_kube-system(58759ca9d0ee331612ac601ca4858681)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-controller-manager-functional-546336_kube-system(58759ca9d0ee331612ac601ca4858681)\\\": rpc error: code = Unknown desc = pod sandbox with name \\\"k8s_kube-controller-manager-functional-546336_kube-system_58759ca9d0ee331612ac601ca4858681_1\\\" already exists\"" pod="kube-system/kube-controller-manager-functional-546336" podUID="58759ca9d0ee331612ac601ca4858681"
	Apr 08 23:33:02 functional-546336 kubelet[12657]: I0408 23:33:02.663303   12657 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/etcd-functional-546336"
	Apr 08 23:33:02 functional-546336 kubelet[12657]: E0408 23:33:02.907789   12657 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-controller-manager-functional-546336_kube-system_58759ca9d0ee331612ac601ca4858681_1\" already exists"
	Apr 08 23:33:02 functional-546336 kubelet[12657]: E0408 23:33:02.907891   12657 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-controller-manager-functional-546336_kube-system_58759ca9d0ee331612ac601ca4858681_1\" already exists" pod="kube-system/kube-controller-manager-functional-546336"
	Apr 08 23:33:02 functional-546336 kubelet[12657]: E0408 23:33:02.907964   12657 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-controller-manager-functional-546336_kube-system_58759ca9d0ee331612ac601ca4858681_1\" already exists" pod="kube-system/kube-controller-manager-functional-546336"
	Apr 08 23:33:02 functional-546336 kubelet[12657]: E0408 23:33:02.908042   12657 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-controller-manager-functional-546336_kube-system(58759ca9d0ee331612ac601ca4858681)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-controller-manager-functional-546336_kube-system(58759ca9d0ee331612ac601ca4858681)\\\": rpc error: code = Unknown desc = pod sandbox with name \\\"k8s_kube-controller-manager-functional-546336_kube-system_58759ca9d0ee331612ac601ca4858681_1\\\" already exists\"" pod="kube-system/kube-controller-manager-functional-546336" podUID="58759ca9d0ee331612ac601ca4858681"
	Apr 08 23:33:03 functional-546336 kubelet[12657]: I0408 23:33:03.541751   12657 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-functional-546336"
	Apr 08 23:33:05 functional-546336 kubelet[12657]: I0408 23:33:05.205693   12657 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-functional-546336" podStartSLOduration=3.205677671 podStartE2EDuration="3.205677671s" podCreationTimestamp="2025-04-08 23:33:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-08 23:33:05.204915457 +0000 UTC m=+10.161275075" watchObservedRunningTime="2025-04-08 23:33:05.205677671 +0000 UTC m=+10.162037288"
	Apr 08 23:33:05 functional-546336 kubelet[12657]: E0408 23:33:05.246783   12657 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744155185246297111,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:157169,},InodesUsed:&UInt64Value{Value:77,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 08 23:33:05 functional-546336 kubelet[12657]: E0408 23:33:05.246805   12657 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744155185246297111,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:157169,},InodesUsed:&UInt64Value{Value:77,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 08 23:33:05 functional-546336 kubelet[12657]: I0408 23:33:05.298497   12657 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-functional-546336"
	Apr 08 23:33:05 functional-546336 kubelet[12657]: I0408 23:33:05.305759   12657 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-functional-546336" podStartSLOduration=2.305744314 podStartE2EDuration="2.305744314s" podCreationTimestamp="2025-04-08 23:33:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-08 23:33:05.217199336 +0000 UTC m=+10.173558953" watchObservedRunningTime="2025-04-08 23:33:05.305744314 +0000 UTC m=+10.262103931"
	Apr 08 23:33:06 functional-546336 kubelet[12657]: I0408 23:33:06.260224   12657 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-functional-546336" podStartSLOduration=1.260209473 podStartE2EDuration="1.260209473s" podCreationTimestamp="2025-04-08 23:33:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-08 23:33:05.318215816 +0000 UTC m=+10.274575433" watchObservedRunningTime="2025-04-08 23:33:06.260209473 +0000 UTC m=+11.216569090"
	Apr 08 23:33:15 functional-546336 kubelet[12657]: E0408 23:33:15.248836   12657 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744155195248243175,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:157169,},InodesUsed:&UInt64Value{Value:77,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 08 23:33:15 functional-546336 kubelet[12657]: E0408 23:33:15.248859   12657 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744155195248243175,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:157169,},InodesUsed:&UInt64Value{Value:77,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 08 23:33:17 functional-546336 kubelet[12657]: E0408 23:33:17.171523   12657 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-controller-manager-functional-546336_kube-system_58759ca9d0ee331612ac601ca4858681_1\" already exists"
	Apr 08 23:33:17 functional-546336 kubelet[12657]: E0408 23:33:17.171878   12657 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-controller-manager-functional-546336_kube-system_58759ca9d0ee331612ac601ca4858681_1\" already exists" pod="kube-system/kube-controller-manager-functional-546336"
	Apr 08 23:33:17 functional-546336 kubelet[12657]: E0408 23:33:17.171931   12657 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-controller-manager-functional-546336_kube-system_58759ca9d0ee331612ac601ca4858681_1\" already exists" pod="kube-system/kube-controller-manager-functional-546336"
	Apr 08 23:33:17 functional-546336 kubelet[12657]: E0408 23:33:17.172010   12657 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-controller-manager-functional-546336_kube-system(58759ca9d0ee331612ac601ca4858681)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-controller-manager-functional-546336_kube-system(58759ca9d0ee331612ac601ca4858681)\\\": rpc error: code = Unknown desc = pod sandbox with name \\\"k8s_kube-controller-manager-functional-546336_kube-system_58759ca9d0ee331612ac601ca4858681_1\\\" already exists\"" pod="kube-system/kube-controller-manager-functional-546336" podUID="58759ca9d0ee331612ac601ca4858681"
	Apr 08 23:33:25 functional-546336 kubelet[12657]: E0408 23:33:25.250256   12657 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744155205249865984,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:157169,},InodesUsed:&UInt64Value{Value:77,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 08 23:33:25 functional-546336 kubelet[12657]: E0408 23:33:25.250621   12657 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744155205249865984,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:157169,},InodesUsed:&UInt64Value{Value:77,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 08 23:33:30 functional-546336 kubelet[12657]: E0408 23:33:30.171094   12657 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-controller-manager-functional-546336_kube-system_58759ca9d0ee331612ac601ca4858681_1\" already exists"
	Apr 08 23:33:30 functional-546336 kubelet[12657]: E0408 23:33:30.171378   12657 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-controller-manager-functional-546336_kube-system_58759ca9d0ee331612ac601ca4858681_1\" already exists" pod="kube-system/kube-controller-manager-functional-546336"
	Apr 08 23:33:30 functional-546336 kubelet[12657]: E0408 23:33:30.171449   12657 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-controller-manager-functional-546336_kube-system_58759ca9d0ee331612ac601ca4858681_1\" already exists" pod="kube-system/kube-controller-manager-functional-546336"
	Apr 08 23:33:30 functional-546336 kubelet[12657]: E0408 23:33:30.171606   12657 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-controller-manager-functional-546336_kube-system(58759ca9d0ee331612ac601ca4858681)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-controller-manager-functional-546336_kube-system(58759ca9d0ee331612ac601ca4858681)\\\": rpc error: code = Unknown desc = pod sandbox with name \\\"k8s_kube-controller-manager-functional-546336_kube-system_58759ca9d0ee331612ac601ca4858681_1\\\" already exists\"" pod="kube-system/kube-controller-manager-functional-546336" podUID="58759ca9d0ee331612ac601ca4858681"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-546336 -n functional-546336
helpers_test.go:261: (dbg) Run:  kubectl --context functional-546336 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestFunctional/serial/ExtraConfig FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/serial/ExtraConfig (135.44s)
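The apiserver log above shows every etcd dial to 127.0.0.1:2379 refused until the storage factory gives up ("Error creating leases: ... context deadline exceeded"), after which kube-controller-manager fails its own health probe against https://192.168.39.234:8441/healthz. The snippet below is a minimal, hypothetical triage probe, not part of the test suite: it repeats that /healthz request so the "connection refused" can be reproduced from the host. The endpoint is copied from the controller-manager log line, and the insecure TLS config is only acceptable against a disposable test cluster.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Probe the same endpoint kube-controller-manager reported as refused.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.39.234:8441/healthz")
	if err != nil {
		fmt.Println("apiserver unreachable:", err) // matches "connect: connection refused" above
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status, string(body))
}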

                                                
                                    
x
+
TestFunctional/parallel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel
functional_test.go:186: Unable to run more tests (deadline exceeded)
--- FAIL: TestFunctional/parallel (0.00s)
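functional_test.go:186 aborts here because the suite's overall -timeout budget was consumed by the earlier serial failures, so no parallel subtests are started. A hypothetical sketch of that kind of guard (names are illustrative, not the actual minikube helper) is a deadline check via testing.T before registering subtests:

package functional_test

import (
	"testing"
	"time"
)

func TestParallelGroup(t *testing.T) {
	// If the suite's -timeout deadline is too close, skip instead of starting
	// subtests that cannot finish; this is the condition the failure reports.
	if deadline, ok := t.Deadline(); ok && time.Until(deadline) < 2*time.Minute {
		t.Skip("Unable to run more tests (deadline exceeded)")
	}
	// ... parallel subtests would be registered here with t.Run(...) ...
}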

                                                
                                    
x
+
TestPreload (164.94s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-623381 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
E0409 00:16:01.017466   16314 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/addons-355098/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-623381 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (1m24.926715832s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-623381 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-623381 image pull gcr.io/k8s-minikube/busybox: (3.513340876s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-623381
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-623381: (6.614863987s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-623381 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-623381 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (1m7.080478977s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-623381 image list
preload_test.go:76: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.7
	registry.k8s.io/kube-scheduler:v1.24.4
	registry.k8s.io/kube-proxy:v1.24.4
	registry.k8s.io/kube-controller-manager:v1.24.4
	registry.k8s.io/kube-apiserver:v1.24.4
	registry.k8s.io/etcd:3.5.3-0
	registry.k8s.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/kube-scheduler:v1.24.4
	k8s.gcr.io/kube-proxy:v1.24.4
	k8s.gcr.io/kube-controller-manager:v1.24.4
	k8s.gcr.io/kube-apiserver:v1.24.4
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20220726-ed811e41

                                                
                                                
-- /stdout --
panic.go:631: *** TestPreload FAILED at 2025-04-09 00:17:26.629850404 +0000 UTC m=+5541.534889629
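The check that fails above is straightforward: after the pull/stop/start cycle, gcr.io/k8s-minikube/busybox must still appear in the image list, but only the preloaded v1.24.4 images survived the restart. Below is a standalone, hypothetical reproduction of that check (binary path and profile name taken from the commands above; this is a sketch, not the preload_test.go source):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Re-run the post-restart check: busybox should still be in the image list.
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "test-preload-623381", "image", "list").CombinedOutput()
	if err != nil {
		fmt.Println("image list failed:", err)
		return
	}
	if strings.Contains(string(out), "gcr.io/k8s-minikube/busybox") {
		fmt.Println("busybox survived the restart")
	} else {
		fmt.Print("busybox missing from image list:\n" + string(out))
	}
}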
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-623381 -n test-preload-623381
helpers_test.go:244: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-623381 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p test-preload-623381 logs -n 25: (1.010191501s)
helpers_test.go:252: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-072032 ssh -n                                                                 | multinode-072032     | jenkins | v1.35.0 | 09 Apr 25 00:02 UTC | 09 Apr 25 00:02 UTC |
	|         | multinode-072032-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-072032 ssh -n multinode-072032 sudo cat                                       | multinode-072032     | jenkins | v1.35.0 | 09 Apr 25 00:02 UTC | 09 Apr 25 00:02 UTC |
	|         | /home/docker/cp-test_multinode-072032-m03_multinode-072032.txt                          |                      |         |         |                     |                     |
	| cp      | multinode-072032 cp multinode-072032-m03:/home/docker/cp-test.txt                       | multinode-072032     | jenkins | v1.35.0 | 09 Apr 25 00:02 UTC | 09 Apr 25 00:02 UTC |
	|         | multinode-072032-m02:/home/docker/cp-test_multinode-072032-m03_multinode-072032-m02.txt |                      |         |         |                     |                     |
	| ssh     | multinode-072032 ssh -n                                                                 | multinode-072032     | jenkins | v1.35.0 | 09 Apr 25 00:02 UTC | 09 Apr 25 00:02 UTC |
	|         | multinode-072032-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-072032 ssh -n multinode-072032-m02 sudo cat                                   | multinode-072032     | jenkins | v1.35.0 | 09 Apr 25 00:02 UTC | 09 Apr 25 00:02 UTC |
	|         | /home/docker/cp-test_multinode-072032-m03_multinode-072032-m02.txt                      |                      |         |         |                     |                     |
	| node    | multinode-072032 node stop m03                                                          | multinode-072032     | jenkins | v1.35.0 | 09 Apr 25 00:02 UTC | 09 Apr 25 00:02 UTC |
	| node    | multinode-072032 node start                                                             | multinode-072032     | jenkins | v1.35.0 | 09 Apr 25 00:02 UTC | 09 Apr 25 00:03 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                      |         |         |                     |                     |
	| node    | list -p multinode-072032                                                                | multinode-072032     | jenkins | v1.35.0 | 09 Apr 25 00:03 UTC |                     |
	| stop    | -p multinode-072032                                                                     | multinode-072032     | jenkins | v1.35.0 | 09 Apr 25 00:03 UTC | 09 Apr 25 00:06 UTC |
	| start   | -p multinode-072032                                                                     | multinode-072032     | jenkins | v1.35.0 | 09 Apr 25 00:06 UTC | 09 Apr 25 00:08 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	| node    | list -p multinode-072032                                                                | multinode-072032     | jenkins | v1.35.0 | 09 Apr 25 00:08 UTC |                     |
	| node    | multinode-072032 node delete                                                            | multinode-072032     | jenkins | v1.35.0 | 09 Apr 25 00:08 UTC | 09 Apr 25 00:08 UTC |
	|         | m03                                                                                     |                      |         |         |                     |                     |
	| stop    | multinode-072032 stop                                                                   | multinode-072032     | jenkins | v1.35.0 | 09 Apr 25 00:08 UTC | 09 Apr 25 00:12 UTC |
	| start   | -p multinode-072032                                                                     | multinode-072032     | jenkins | v1.35.0 | 09 Apr 25 00:12 UTC | 09 Apr 25 00:13 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | list -p multinode-072032                                                                | multinode-072032     | jenkins | v1.35.0 | 09 Apr 25 00:13 UTC |                     |
	| start   | -p multinode-072032-m02                                                                 | multinode-072032-m02 | jenkins | v1.35.0 | 09 Apr 25 00:13 UTC |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| start   | -p multinode-072032-m03                                                                 | multinode-072032-m03 | jenkins | v1.35.0 | 09 Apr 25 00:13 UTC | 09 Apr 25 00:14 UTC |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | add -p multinode-072032                                                                 | multinode-072032     | jenkins | v1.35.0 | 09 Apr 25 00:14 UTC |                     |
	| delete  | -p multinode-072032-m03                                                                 | multinode-072032-m03 | jenkins | v1.35.0 | 09 Apr 25 00:14 UTC | 09 Apr 25 00:14 UTC |
	| delete  | -p multinode-072032                                                                     | multinode-072032     | jenkins | v1.35.0 | 09 Apr 25 00:14 UTC | 09 Apr 25 00:14 UTC |
	| start   | -p test-preload-623381                                                                  | test-preload-623381  | jenkins | v1.35.0 | 09 Apr 25 00:14 UTC | 09 Apr 25 00:16 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                                                           |                      |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                                                           |                      |         |         |                     |                     |
	|         |  --container-runtime=crio                                                               |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.4                                                            |                      |         |         |                     |                     |
	| image   | test-preload-623381 image pull                                                          | test-preload-623381  | jenkins | v1.35.0 | 09 Apr 25 00:16 UTC | 09 Apr 25 00:16 UTC |
	|         | gcr.io/k8s-minikube/busybox                                                             |                      |         |         |                     |                     |
	| stop    | -p test-preload-623381                                                                  | test-preload-623381  | jenkins | v1.35.0 | 09 Apr 25 00:16 UTC | 09 Apr 25 00:16 UTC |
	| start   | -p test-preload-623381                                                                  | test-preload-623381  | jenkins | v1.35.0 | 09 Apr 25 00:16 UTC | 09 Apr 25 00:17 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                  |                      |         |         |                     |                     |
	|         | --wait=true --driver=kvm2                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| image   | test-preload-623381 image list                                                          | test-preload-623381  | jenkins | v1.35.0 | 09 Apr 25 00:17 UTC | 09 Apr 25 00:17 UTC |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/09 00:16:19
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0409 00:16:19.375549   52680 out.go:345] Setting OutFile to fd 1 ...
	I0409 00:16:19.376274   52680 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0409 00:16:19.376295   52680 out.go:358] Setting ErrFile to fd 2...
	I0409 00:16:19.376303   52680 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0409 00:16:19.376736   52680 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20501-9125/.minikube/bin
	I0409 00:16:19.377785   52680 out.go:352] Setting JSON to false
	I0409 00:16:19.378660   52680 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":7124,"bootTime":1744150655,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0409 00:16:19.378770   52680 start.go:139] virtualization: kvm guest
	I0409 00:16:19.380662   52680 out.go:177] * [test-preload-623381] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0409 00:16:19.382159   52680 notify.go:220] Checking for updates...
	I0409 00:16:19.382178   52680 out.go:177]   - MINIKUBE_LOCATION=20501
	I0409 00:16:19.383458   52680 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0409 00:16:19.384727   52680 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20501-9125/kubeconfig
	I0409 00:16:19.385894   52680 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20501-9125/.minikube
	I0409 00:16:19.387043   52680 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0409 00:16:19.388111   52680 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0409 00:16:19.389582   52680 config.go:182] Loaded profile config "test-preload-623381": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0409 00:16:19.390078   52680 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0409 00:16:19.390181   52680 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0409 00:16:19.404848   52680 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37541
	I0409 00:16:19.405329   52680 main.go:141] libmachine: () Calling .GetVersion
	I0409 00:16:19.405847   52680 main.go:141] libmachine: Using API Version  1
	I0409 00:16:19.405876   52680 main.go:141] libmachine: () Calling .SetConfigRaw
	I0409 00:16:19.406221   52680 main.go:141] libmachine: () Calling .GetMachineName
	I0409 00:16:19.406371   52680 main.go:141] libmachine: (test-preload-623381) Calling .DriverName
	I0409 00:16:19.408008   52680 out.go:177] * Kubernetes 1.32.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.32.2
	I0409 00:16:19.409044   52680 driver.go:404] Setting default libvirt URI to qemu:///system
	I0409 00:16:19.409325   52680 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0409 00:16:19.409366   52680 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0409 00:16:19.423581   52680 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34009
	I0409 00:16:19.423942   52680 main.go:141] libmachine: () Calling .GetVersion
	I0409 00:16:19.424342   52680 main.go:141] libmachine: Using API Version  1
	I0409 00:16:19.424357   52680 main.go:141] libmachine: () Calling .SetConfigRaw
	I0409 00:16:19.424648   52680 main.go:141] libmachine: () Calling .GetMachineName
	I0409 00:16:19.424825   52680 main.go:141] libmachine: (test-preload-623381) Calling .DriverName
	I0409 00:16:19.459347   52680 out.go:177] * Using the kvm2 driver based on existing profile
	I0409 00:16:19.460376   52680 start.go:297] selected driver: kvm2
	I0409 00:16:19.460397   52680 start.go:901] validating driver "kvm2" against &{Name:test-preload-623381 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-623381 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.104 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0409 00:16:19.460530   52680 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0409 00:16:19.461661   52680 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0409 00:16:19.461753   52680 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20501-9125/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0409 00:16:19.476317   52680 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0409 00:16:19.476650   52680 start_flags.go:975] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0409 00:16:19.476681   52680 cni.go:84] Creating CNI manager for ""
	I0409 00:16:19.476721   52680 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0409 00:16:19.476773   52680 start.go:340] cluster config:
	{Name:test-preload-623381 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-623381 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.104 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0409 00:16:19.476867   52680 iso.go:125] acquiring lock: {Name:mk618477bad490b102618c53c9c8c6b34f33ce81 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0409 00:16:19.478445   52680 out.go:177] * Starting "test-preload-623381" primary control-plane node in "test-preload-623381" cluster
	I0409 00:16:19.479453   52680 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0409 00:16:19.579378   52680 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0409 00:16:19.579409   52680 cache.go:56] Caching tarball of preloaded images
	I0409 00:16:19.579592   52680 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0409 00:16:19.581326   52680 out.go:177] * Downloading Kubernetes v1.24.4 preload ...
	I0409 00:16:19.582524   52680 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0409 00:16:19.679618   52680 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4?checksum=md5:b2ee0ab83ed99f9e7ff71cb0cf27e8f9 -> /home/jenkins/minikube-integration/20501-9125/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0409 00:16:31.399036   52680 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0409 00:16:31.399128   52680 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/20501-9125/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0409 00:16:32.237136   52680 cache.go:59] Finished verifying existence of preloaded tar for v1.24.4 on crio
	I0409 00:16:32.237248   52680 profile.go:143] Saving config to /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/test-preload-623381/config.json ...
	I0409 00:16:32.237471   52680 start.go:360] acquireMachinesLock for test-preload-623381: {Name:mke7be7b51cfddf557a39ecf6493fff6a1168ec9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0409 00:16:32.237532   52680 start.go:364] duration metric: took 39.804µs to acquireMachinesLock for "test-preload-623381"
	I0409 00:16:32.237549   52680 start.go:96] Skipping create...Using existing machine configuration
	I0409 00:16:32.237557   52680 fix.go:54] fixHost starting: 
	I0409 00:16:32.237808   52680 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0409 00:16:32.237838   52680 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0409 00:16:32.252172   52680 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38473
	I0409 00:16:32.252633   52680 main.go:141] libmachine: () Calling .GetVersion
	I0409 00:16:32.253002   52680 main.go:141] libmachine: Using API Version  1
	I0409 00:16:32.253024   52680 main.go:141] libmachine: () Calling .SetConfigRaw
	I0409 00:16:32.253310   52680 main.go:141] libmachine: () Calling .GetMachineName
	I0409 00:16:32.253487   52680 main.go:141] libmachine: (test-preload-623381) Calling .DriverName
	I0409 00:16:32.253600   52680 main.go:141] libmachine: (test-preload-623381) Calling .GetState
	I0409 00:16:32.255251   52680 fix.go:112] recreateIfNeeded on test-preload-623381: state=Stopped err=<nil>
	I0409 00:16:32.255276   52680 main.go:141] libmachine: (test-preload-623381) Calling .DriverName
	W0409 00:16:32.255407   52680 fix.go:138] unexpected machine state, will restart: <nil>
	I0409 00:16:32.257457   52680 out.go:177] * Restarting existing kvm2 VM for "test-preload-623381" ...
	I0409 00:16:32.258754   52680 main.go:141] libmachine: (test-preload-623381) Calling .Start
	I0409 00:16:32.258982   52680 main.go:141] libmachine: (test-preload-623381) starting domain...
	I0409 00:16:32.259001   52680 main.go:141] libmachine: (test-preload-623381) ensuring networks are active...
	I0409 00:16:32.259631   52680 main.go:141] libmachine: (test-preload-623381) Ensuring network default is active
	I0409 00:16:32.259885   52680 main.go:141] libmachine: (test-preload-623381) Ensuring network mk-test-preload-623381 is active
	I0409 00:16:32.260237   52680 main.go:141] libmachine: (test-preload-623381) getting domain XML...
	I0409 00:16:32.261063   52680 main.go:141] libmachine: (test-preload-623381) creating domain...
	I0409 00:16:33.458940   52680 main.go:141] libmachine: (test-preload-623381) waiting for IP...
	I0409 00:16:33.459839   52680 main.go:141] libmachine: (test-preload-623381) DBG | domain test-preload-623381 has defined MAC address 52:54:00:dd:9e:95 in network mk-test-preload-623381
	I0409 00:16:33.460281   52680 main.go:141] libmachine: (test-preload-623381) DBG | unable to find current IP address of domain test-preload-623381 in network mk-test-preload-623381
	I0409 00:16:33.460335   52680 main.go:141] libmachine: (test-preload-623381) DBG | I0409 00:16:33.460261   52764 retry.go:31] will retry after 219.059805ms: waiting for domain to come up
	I0409 00:16:33.680755   52680 main.go:141] libmachine: (test-preload-623381) DBG | domain test-preload-623381 has defined MAC address 52:54:00:dd:9e:95 in network mk-test-preload-623381
	I0409 00:16:33.681212   52680 main.go:141] libmachine: (test-preload-623381) DBG | unable to find current IP address of domain test-preload-623381 in network mk-test-preload-623381
	I0409 00:16:33.681239   52680 main.go:141] libmachine: (test-preload-623381) DBG | I0409 00:16:33.681173   52764 retry.go:31] will retry after 240.257748ms: waiting for domain to come up
	I0409 00:16:33.922776   52680 main.go:141] libmachine: (test-preload-623381) DBG | domain test-preload-623381 has defined MAC address 52:54:00:dd:9e:95 in network mk-test-preload-623381
	I0409 00:16:33.923239   52680 main.go:141] libmachine: (test-preload-623381) DBG | unable to find current IP address of domain test-preload-623381 in network mk-test-preload-623381
	I0409 00:16:33.923266   52680 main.go:141] libmachine: (test-preload-623381) DBG | I0409 00:16:33.923203   52764 retry.go:31] will retry after 454.075135ms: waiting for domain to come up
	I0409 00:16:34.378802   52680 main.go:141] libmachine: (test-preload-623381) DBG | domain test-preload-623381 has defined MAC address 52:54:00:dd:9e:95 in network mk-test-preload-623381
	I0409 00:16:34.379337   52680 main.go:141] libmachine: (test-preload-623381) DBG | unable to find current IP address of domain test-preload-623381 in network mk-test-preload-623381
	I0409 00:16:34.379367   52680 main.go:141] libmachine: (test-preload-623381) DBG | I0409 00:16:34.379282   52764 retry.go:31] will retry after 579.314455ms: waiting for domain to come up
	I0409 00:16:34.960127   52680 main.go:141] libmachine: (test-preload-623381) DBG | domain test-preload-623381 has defined MAC address 52:54:00:dd:9e:95 in network mk-test-preload-623381
	I0409 00:16:34.960528   52680 main.go:141] libmachine: (test-preload-623381) DBG | unable to find current IP address of domain test-preload-623381 in network mk-test-preload-623381
	I0409 00:16:34.960548   52680 main.go:141] libmachine: (test-preload-623381) DBG | I0409 00:16:34.960497   52764 retry.go:31] will retry after 552.91099ms: waiting for domain to come up
	I0409 00:16:35.515223   52680 main.go:141] libmachine: (test-preload-623381) DBG | domain test-preload-623381 has defined MAC address 52:54:00:dd:9e:95 in network mk-test-preload-623381
	I0409 00:16:35.515609   52680 main.go:141] libmachine: (test-preload-623381) DBG | unable to find current IP address of domain test-preload-623381 in network mk-test-preload-623381
	I0409 00:16:35.515697   52680 main.go:141] libmachine: (test-preload-623381) DBG | I0409 00:16:35.515594   52764 retry.go:31] will retry after 844.266223ms: waiting for domain to come up
	I0409 00:16:36.361634   52680 main.go:141] libmachine: (test-preload-623381) DBG | domain test-preload-623381 has defined MAC address 52:54:00:dd:9e:95 in network mk-test-preload-623381
	I0409 00:16:36.362127   52680 main.go:141] libmachine: (test-preload-623381) DBG | unable to find current IP address of domain test-preload-623381 in network mk-test-preload-623381
	I0409 00:16:36.362149   52680 main.go:141] libmachine: (test-preload-623381) DBG | I0409 00:16:36.362084   52764 retry.go:31] will retry after 1.119991224s: waiting for domain to come up
	I0409 00:16:37.483651   52680 main.go:141] libmachine: (test-preload-623381) DBG | domain test-preload-623381 has defined MAC address 52:54:00:dd:9e:95 in network mk-test-preload-623381
	I0409 00:16:37.484105   52680 main.go:141] libmachine: (test-preload-623381) DBG | unable to find current IP address of domain test-preload-623381 in network mk-test-preload-623381
	I0409 00:16:37.484134   52680 main.go:141] libmachine: (test-preload-623381) DBG | I0409 00:16:37.484056   52764 retry.go:31] will retry after 1.243706124s: waiting for domain to come up
	I0409 00:16:38.729662   52680 main.go:141] libmachine: (test-preload-623381) DBG | domain test-preload-623381 has defined MAC address 52:54:00:dd:9e:95 in network mk-test-preload-623381
	I0409 00:16:38.730029   52680 main.go:141] libmachine: (test-preload-623381) DBG | unable to find current IP address of domain test-preload-623381 in network mk-test-preload-623381
	I0409 00:16:38.730061   52680 main.go:141] libmachine: (test-preload-623381) DBG | I0409 00:16:38.730015   52764 retry.go:31] will retry after 1.757774184s: waiting for domain to come up
	I0409 00:16:40.490157   52680 main.go:141] libmachine: (test-preload-623381) DBG | domain test-preload-623381 has defined MAC address 52:54:00:dd:9e:95 in network mk-test-preload-623381
	I0409 00:16:40.490502   52680 main.go:141] libmachine: (test-preload-623381) DBG | unable to find current IP address of domain test-preload-623381 in network mk-test-preload-623381
	I0409 00:16:40.490525   52680 main.go:141] libmachine: (test-preload-623381) DBG | I0409 00:16:40.490464   52764 retry.go:31] will retry after 1.417002223s: waiting for domain to come up
	I0409 00:16:41.909078   52680 main.go:141] libmachine: (test-preload-623381) DBG | domain test-preload-623381 has defined MAC address 52:54:00:dd:9e:95 in network mk-test-preload-623381
	I0409 00:16:41.909519   52680 main.go:141] libmachine: (test-preload-623381) DBG | unable to find current IP address of domain test-preload-623381 in network mk-test-preload-623381
	I0409 00:16:41.909549   52680 main.go:141] libmachine: (test-preload-623381) DBG | I0409 00:16:41.909485   52764 retry.go:31] will retry after 2.568760988s: waiting for domain to come up
	I0409 00:16:44.480749   52680 main.go:141] libmachine: (test-preload-623381) DBG | domain test-preload-623381 has defined MAC address 52:54:00:dd:9e:95 in network mk-test-preload-623381
	I0409 00:16:44.481161   52680 main.go:141] libmachine: (test-preload-623381) DBG | unable to find current IP address of domain test-preload-623381 in network mk-test-preload-623381
	I0409 00:16:44.481180   52680 main.go:141] libmachine: (test-preload-623381) DBG | I0409 00:16:44.481128   52764 retry.go:31] will retry after 2.549476931s: waiting for domain to come up
	I0409 00:16:47.033796   52680 main.go:141] libmachine: (test-preload-623381) DBG | domain test-preload-623381 has defined MAC address 52:54:00:dd:9e:95 in network mk-test-preload-623381
	I0409 00:16:47.034148   52680 main.go:141] libmachine: (test-preload-623381) DBG | unable to find current IP address of domain test-preload-623381 in network mk-test-preload-623381
	I0409 00:16:47.034184   52680 main.go:141] libmachine: (test-preload-623381) DBG | I0409 00:16:47.034116   52764 retry.go:31] will retry after 3.981497959s: waiting for domain to come up
	I0409 00:16:51.016753   52680 main.go:141] libmachine: (test-preload-623381) DBG | domain test-preload-623381 has defined MAC address 52:54:00:dd:9e:95 in network mk-test-preload-623381
	I0409 00:16:51.017243   52680 main.go:141] libmachine: (test-preload-623381) found domain IP: 192.168.39.104
	I0409 00:16:51.017266   52680 main.go:141] libmachine: (test-preload-623381) reserving static IP address...
	I0409 00:16:51.017282   52680 main.go:141] libmachine: (test-preload-623381) DBG | domain test-preload-623381 has current primary IP address 192.168.39.104 and MAC address 52:54:00:dd:9e:95 in network mk-test-preload-623381
	I0409 00:16:51.017616   52680 main.go:141] libmachine: (test-preload-623381) DBG | found host DHCP lease matching {name: "test-preload-623381", mac: "52:54:00:dd:9e:95", ip: "192.168.39.104"} in network mk-test-preload-623381: {Iface:virbr1 ExpiryTime:2025-04-09 01:16:42 +0000 UTC Type:0 Mac:52:54:00:dd:9e:95 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:test-preload-623381 Clientid:01:52:54:00:dd:9e:95}
	I0409 00:16:51.017649   52680 main.go:141] libmachine: (test-preload-623381) DBG | skip adding static IP to network mk-test-preload-623381 - found existing host DHCP lease matching {name: "test-preload-623381", mac: "52:54:00:dd:9e:95", ip: "192.168.39.104"}
	I0409 00:16:51.017664   52680 main.go:141] libmachine: (test-preload-623381) reserved static IP address 192.168.39.104 for domain test-preload-623381
	I0409 00:16:51.017682   52680 main.go:141] libmachine: (test-preload-623381) waiting for SSH...
	I0409 00:16:51.017695   52680 main.go:141] libmachine: (test-preload-623381) DBG | Getting to WaitForSSH function...
	I0409 00:16:51.019518   52680 main.go:141] libmachine: (test-preload-623381) DBG | domain test-preload-623381 has defined MAC address 52:54:00:dd:9e:95 in network mk-test-preload-623381
	I0409 00:16:51.019902   52680 main.go:141] libmachine: (test-preload-623381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:9e:95", ip: ""} in network mk-test-preload-623381: {Iface:virbr1 ExpiryTime:2025-04-09 01:16:42 +0000 UTC Type:0 Mac:52:54:00:dd:9e:95 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:test-preload-623381 Clientid:01:52:54:00:dd:9e:95}
	I0409 00:16:51.019929   52680 main.go:141] libmachine: (test-preload-623381) DBG | domain test-preload-623381 has defined IP address 192.168.39.104 and MAC address 52:54:00:dd:9e:95 in network mk-test-preload-623381
	I0409 00:16:51.020080   52680 main.go:141] libmachine: (test-preload-623381) DBG | Using SSH client type: external
	I0409 00:16:51.020138   52680 main.go:141] libmachine: (test-preload-623381) DBG | Using SSH private key: /home/jenkins/minikube-integration/20501-9125/.minikube/machines/test-preload-623381/id_rsa (-rw-------)
	I0409 00:16:51.020180   52680 main.go:141] libmachine: (test-preload-623381) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.104 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20501-9125/.minikube/machines/test-preload-623381/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0409 00:16:51.020197   52680 main.go:141] libmachine: (test-preload-623381) DBG | About to run SSH command:
	I0409 00:16:51.020207   52680 main.go:141] libmachine: (test-preload-623381) DBG | exit 0
	I0409 00:16:51.148058   52680 main.go:141] libmachine: (test-preload-623381) DBG | SSH cmd err, output: <nil>: 
	I0409 00:16:51.148404   52680 main.go:141] libmachine: (test-preload-623381) Calling .GetConfigRaw
	I0409 00:16:51.149036   52680 main.go:141] libmachine: (test-preload-623381) Calling .GetIP
	I0409 00:16:51.151425   52680 main.go:141] libmachine: (test-preload-623381) DBG | domain test-preload-623381 has defined MAC address 52:54:00:dd:9e:95 in network mk-test-preload-623381
	I0409 00:16:51.151729   52680 main.go:141] libmachine: (test-preload-623381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:9e:95", ip: ""} in network mk-test-preload-623381: {Iface:virbr1 ExpiryTime:2025-04-09 01:16:42 +0000 UTC Type:0 Mac:52:54:00:dd:9e:95 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:test-preload-623381 Clientid:01:52:54:00:dd:9e:95}
	I0409 00:16:51.151760   52680 main.go:141] libmachine: (test-preload-623381) DBG | domain test-preload-623381 has defined IP address 192.168.39.104 and MAC address 52:54:00:dd:9e:95 in network mk-test-preload-623381
	I0409 00:16:51.152029   52680 profile.go:143] Saving config to /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/test-preload-623381/config.json ...
	I0409 00:16:51.152272   52680 machine.go:93] provisionDockerMachine start ...
	I0409 00:16:51.152297   52680 main.go:141] libmachine: (test-preload-623381) Calling .DriverName
	I0409 00:16:51.152481   52680 main.go:141] libmachine: (test-preload-623381) Calling .GetSSHHostname
	I0409 00:16:51.154596   52680 main.go:141] libmachine: (test-preload-623381) DBG | domain test-preload-623381 has defined MAC address 52:54:00:dd:9e:95 in network mk-test-preload-623381
	I0409 00:16:51.154898   52680 main.go:141] libmachine: (test-preload-623381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:9e:95", ip: ""} in network mk-test-preload-623381: {Iface:virbr1 ExpiryTime:2025-04-09 01:16:42 +0000 UTC Type:0 Mac:52:54:00:dd:9e:95 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:test-preload-623381 Clientid:01:52:54:00:dd:9e:95}
	I0409 00:16:51.154948   52680 main.go:141] libmachine: (test-preload-623381) DBG | domain test-preload-623381 has defined IP address 192.168.39.104 and MAC address 52:54:00:dd:9e:95 in network mk-test-preload-623381
	I0409 00:16:51.155112   52680 main.go:141] libmachine: (test-preload-623381) Calling .GetSSHPort
	I0409 00:16:51.155327   52680 main.go:141] libmachine: (test-preload-623381) Calling .GetSSHKeyPath
	I0409 00:16:51.155508   52680 main.go:141] libmachine: (test-preload-623381) Calling .GetSSHKeyPath
	I0409 00:16:51.155632   52680 main.go:141] libmachine: (test-preload-623381) Calling .GetSSHUsername
	I0409 00:16:51.155792   52680 main.go:141] libmachine: Using SSH client type: native
	I0409 00:16:51.156089   52680 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.104 22 <nil> <nil>}
	I0409 00:16:51.156103   52680 main.go:141] libmachine: About to run SSH command:
	hostname
	I0409 00:16:51.267825   52680 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0409 00:16:51.267848   52680 main.go:141] libmachine: (test-preload-623381) Calling .GetMachineName
	I0409 00:16:51.268118   52680 buildroot.go:166] provisioning hostname "test-preload-623381"
	I0409 00:16:51.268150   52680 main.go:141] libmachine: (test-preload-623381) Calling .GetMachineName
	I0409 00:16:51.268314   52680 main.go:141] libmachine: (test-preload-623381) Calling .GetSSHHostname
	I0409 00:16:51.270870   52680 main.go:141] libmachine: (test-preload-623381) DBG | domain test-preload-623381 has defined MAC address 52:54:00:dd:9e:95 in network mk-test-preload-623381
	I0409 00:16:51.271255   52680 main.go:141] libmachine: (test-preload-623381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:9e:95", ip: ""} in network mk-test-preload-623381: {Iface:virbr1 ExpiryTime:2025-04-09 01:16:42 +0000 UTC Type:0 Mac:52:54:00:dd:9e:95 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:test-preload-623381 Clientid:01:52:54:00:dd:9e:95}
	I0409 00:16:51.271285   52680 main.go:141] libmachine: (test-preload-623381) DBG | domain test-preload-623381 has defined IP address 192.168.39.104 and MAC address 52:54:00:dd:9e:95 in network mk-test-preload-623381
	I0409 00:16:51.271412   52680 main.go:141] libmachine: (test-preload-623381) Calling .GetSSHPort
	I0409 00:16:51.271658   52680 main.go:141] libmachine: (test-preload-623381) Calling .GetSSHKeyPath
	I0409 00:16:51.271791   52680 main.go:141] libmachine: (test-preload-623381) Calling .GetSSHKeyPath
	I0409 00:16:51.271956   52680 main.go:141] libmachine: (test-preload-623381) Calling .GetSSHUsername
	I0409 00:16:51.272090   52680 main.go:141] libmachine: Using SSH client type: native
	I0409 00:16:51.272269   52680 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.104 22 <nil> <nil>}
	I0409 00:16:51.272280   52680 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-623381 && echo "test-preload-623381" | sudo tee /etc/hostname
	I0409 00:16:51.397922   52680 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-623381
	
	I0409 00:16:51.397958   52680 main.go:141] libmachine: (test-preload-623381) Calling .GetSSHHostname
	I0409 00:16:51.400587   52680 main.go:141] libmachine: (test-preload-623381) DBG | domain test-preload-623381 has defined MAC address 52:54:00:dd:9e:95 in network mk-test-preload-623381
	I0409 00:16:51.400916   52680 main.go:141] libmachine: (test-preload-623381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:9e:95", ip: ""} in network mk-test-preload-623381: {Iface:virbr1 ExpiryTime:2025-04-09 01:16:42 +0000 UTC Type:0 Mac:52:54:00:dd:9e:95 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:test-preload-623381 Clientid:01:52:54:00:dd:9e:95}
	I0409 00:16:51.400943   52680 main.go:141] libmachine: (test-preload-623381) DBG | domain test-preload-623381 has defined IP address 192.168.39.104 and MAC address 52:54:00:dd:9e:95 in network mk-test-preload-623381
	I0409 00:16:51.401101   52680 main.go:141] libmachine: (test-preload-623381) Calling .GetSSHPort
	I0409 00:16:51.401286   52680 main.go:141] libmachine: (test-preload-623381) Calling .GetSSHKeyPath
	I0409 00:16:51.401454   52680 main.go:141] libmachine: (test-preload-623381) Calling .GetSSHKeyPath
	I0409 00:16:51.401569   52680 main.go:141] libmachine: (test-preload-623381) Calling .GetSSHUsername
	I0409 00:16:51.401719   52680 main.go:141] libmachine: Using SSH client type: native
	I0409 00:16:51.401922   52680 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.104 22 <nil> <nil>}
	I0409 00:16:51.401940   52680 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-623381' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-623381/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-623381' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0409 00:16:51.520141   52680 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0409 00:16:51.520173   52680 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20501-9125/.minikube CaCertPath:/home/jenkins/minikube-integration/20501-9125/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20501-9125/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20501-9125/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20501-9125/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20501-9125/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20501-9125/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20501-9125/.minikube}
	I0409 00:16:51.520192   52680 buildroot.go:174] setting up certificates
	I0409 00:16:51.520202   52680 provision.go:84] configureAuth start
	I0409 00:16:51.520210   52680 main.go:141] libmachine: (test-preload-623381) Calling .GetMachineName
	I0409 00:16:51.520479   52680 main.go:141] libmachine: (test-preload-623381) Calling .GetIP
	I0409 00:16:51.522891   52680 main.go:141] libmachine: (test-preload-623381) DBG | domain test-preload-623381 has defined MAC address 52:54:00:dd:9e:95 in network mk-test-preload-623381
	I0409 00:16:51.523156   52680 main.go:141] libmachine: (test-preload-623381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:9e:95", ip: ""} in network mk-test-preload-623381: {Iface:virbr1 ExpiryTime:2025-04-09 01:16:42 +0000 UTC Type:0 Mac:52:54:00:dd:9e:95 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:test-preload-623381 Clientid:01:52:54:00:dd:9e:95}
	I0409 00:16:51.523211   52680 main.go:141] libmachine: (test-preload-623381) DBG | domain test-preload-623381 has defined IP address 192.168.39.104 and MAC address 52:54:00:dd:9e:95 in network mk-test-preload-623381
	I0409 00:16:51.523298   52680 main.go:141] libmachine: (test-preload-623381) Calling .GetSSHHostname
	I0409 00:16:51.525327   52680 main.go:141] libmachine: (test-preload-623381) DBG | domain test-preload-623381 has defined MAC address 52:54:00:dd:9e:95 in network mk-test-preload-623381
	I0409 00:16:51.525632   52680 main.go:141] libmachine: (test-preload-623381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:9e:95", ip: ""} in network mk-test-preload-623381: {Iface:virbr1 ExpiryTime:2025-04-09 01:16:42 +0000 UTC Type:0 Mac:52:54:00:dd:9e:95 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:test-preload-623381 Clientid:01:52:54:00:dd:9e:95}
	I0409 00:16:51.525663   52680 main.go:141] libmachine: (test-preload-623381) DBG | domain test-preload-623381 has defined IP address 192.168.39.104 and MAC address 52:54:00:dd:9e:95 in network mk-test-preload-623381
	I0409 00:16:51.525734   52680 provision.go:143] copyHostCerts
	I0409 00:16:51.525797   52680 exec_runner.go:144] found /home/jenkins/minikube-integration/20501-9125/.minikube/ca.pem, removing ...
	I0409 00:16:51.525819   52680 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20501-9125/.minikube/ca.pem
	I0409 00:16:51.525906   52680 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20501-9125/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20501-9125/.minikube/ca.pem (1082 bytes)
	I0409 00:16:51.526052   52680 exec_runner.go:144] found /home/jenkins/minikube-integration/20501-9125/.minikube/cert.pem, removing ...
	I0409 00:16:51.526066   52680 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20501-9125/.minikube/cert.pem
	I0409 00:16:51.526101   52680 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20501-9125/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20501-9125/.minikube/cert.pem (1123 bytes)
	I0409 00:16:51.526177   52680 exec_runner.go:144] found /home/jenkins/minikube-integration/20501-9125/.minikube/key.pem, removing ...
	I0409 00:16:51.526186   52680 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20501-9125/.minikube/key.pem
	I0409 00:16:51.526219   52680 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20501-9125/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20501-9125/.minikube/key.pem (1675 bytes)
	I0409 00:16:51.526281   52680 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20501-9125/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20501-9125/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20501-9125/.minikube/certs/ca-key.pem org=jenkins.test-preload-623381 san=[127.0.0.1 192.168.39.104 localhost minikube test-preload-623381]
	I0409 00:16:51.668956   52680 provision.go:177] copyRemoteCerts
	I0409 00:16:51.669006   52680 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0409 00:16:51.669028   52680 main.go:141] libmachine: (test-preload-623381) Calling .GetSSHHostname
	I0409 00:16:51.671760   52680 main.go:141] libmachine: (test-preload-623381) DBG | domain test-preload-623381 has defined MAC address 52:54:00:dd:9e:95 in network mk-test-preload-623381
	I0409 00:16:51.672133   52680 main.go:141] libmachine: (test-preload-623381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:9e:95", ip: ""} in network mk-test-preload-623381: {Iface:virbr1 ExpiryTime:2025-04-09 01:16:42 +0000 UTC Type:0 Mac:52:54:00:dd:9e:95 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:test-preload-623381 Clientid:01:52:54:00:dd:9e:95}
	I0409 00:16:51.672158   52680 main.go:141] libmachine: (test-preload-623381) DBG | domain test-preload-623381 has defined IP address 192.168.39.104 and MAC address 52:54:00:dd:9e:95 in network mk-test-preload-623381
	I0409 00:16:51.672332   52680 main.go:141] libmachine: (test-preload-623381) Calling .GetSSHPort
	I0409 00:16:51.672506   52680 main.go:141] libmachine: (test-preload-623381) Calling .GetSSHKeyPath
	I0409 00:16:51.672669   52680 main.go:141] libmachine: (test-preload-623381) Calling .GetSSHUsername
	I0409 00:16:51.672799   52680 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20501-9125/.minikube/machines/test-preload-623381/id_rsa Username:docker}
	I0409 00:16:51.757708   52680 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20501-9125/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0409 00:16:51.780245   52680 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20501-9125/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0409 00:16:51.801555   52680 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20501-9125/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0409 00:16:51.823699   52680 provision.go:87] duration metric: took 303.484165ms to configureAuth
	I0409 00:16:51.823731   52680 buildroot.go:189] setting minikube options for container-runtime
	I0409 00:16:51.823976   52680 config.go:182] Loaded profile config "test-preload-623381": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0409 00:16:51.824048   52680 main.go:141] libmachine: (test-preload-623381) Calling .GetSSHHostname
	I0409 00:16:51.826767   52680 main.go:141] libmachine: (test-preload-623381) DBG | domain test-preload-623381 has defined MAC address 52:54:00:dd:9e:95 in network mk-test-preload-623381
	I0409 00:16:51.827199   52680 main.go:141] libmachine: (test-preload-623381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:9e:95", ip: ""} in network mk-test-preload-623381: {Iface:virbr1 ExpiryTime:2025-04-09 01:16:42 +0000 UTC Type:0 Mac:52:54:00:dd:9e:95 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:test-preload-623381 Clientid:01:52:54:00:dd:9e:95}
	I0409 00:16:51.827229   52680 main.go:141] libmachine: (test-preload-623381) DBG | domain test-preload-623381 has defined IP address 192.168.39.104 and MAC address 52:54:00:dd:9e:95 in network mk-test-preload-623381
	I0409 00:16:51.827480   52680 main.go:141] libmachine: (test-preload-623381) Calling .GetSSHPort
	I0409 00:16:51.827666   52680 main.go:141] libmachine: (test-preload-623381) Calling .GetSSHKeyPath
	I0409 00:16:51.827828   52680 main.go:141] libmachine: (test-preload-623381) Calling .GetSSHKeyPath
	I0409 00:16:51.828011   52680 main.go:141] libmachine: (test-preload-623381) Calling .GetSSHUsername
	I0409 00:16:51.828155   52680 main.go:141] libmachine: Using SSH client type: native
	I0409 00:16:51.828357   52680 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.104 22 <nil> <nil>}
	I0409 00:16:51.828373   52680 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0409 00:16:52.057368   52680 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0409 00:16:52.057392   52680 machine.go:96] duration metric: took 905.106744ms to provisionDockerMachine
	I0409 00:16:52.057403   52680 start.go:293] postStartSetup for "test-preload-623381" (driver="kvm2")
	I0409 00:16:52.057418   52680 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0409 00:16:52.057438   52680 main.go:141] libmachine: (test-preload-623381) Calling .DriverName
	I0409 00:16:52.057751   52680 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0409 00:16:52.057785   52680 main.go:141] libmachine: (test-preload-623381) Calling .GetSSHHostname
	I0409 00:16:52.060994   52680 main.go:141] libmachine: (test-preload-623381) DBG | domain test-preload-623381 has defined MAC address 52:54:00:dd:9e:95 in network mk-test-preload-623381
	I0409 00:16:52.061412   52680 main.go:141] libmachine: (test-preload-623381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:9e:95", ip: ""} in network mk-test-preload-623381: {Iface:virbr1 ExpiryTime:2025-04-09 01:16:42 +0000 UTC Type:0 Mac:52:54:00:dd:9e:95 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:test-preload-623381 Clientid:01:52:54:00:dd:9e:95}
	I0409 00:16:52.061444   52680 main.go:141] libmachine: (test-preload-623381) DBG | domain test-preload-623381 has defined IP address 192.168.39.104 and MAC address 52:54:00:dd:9e:95 in network mk-test-preload-623381
	I0409 00:16:52.061588   52680 main.go:141] libmachine: (test-preload-623381) Calling .GetSSHPort
	I0409 00:16:52.061764   52680 main.go:141] libmachine: (test-preload-623381) Calling .GetSSHKeyPath
	I0409 00:16:52.061904   52680 main.go:141] libmachine: (test-preload-623381) Calling .GetSSHUsername
	I0409 00:16:52.062050   52680 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20501-9125/.minikube/machines/test-preload-623381/id_rsa Username:docker}
	I0409 00:16:52.146209   52680 ssh_runner.go:195] Run: cat /etc/os-release
	I0409 00:16:52.150223   52680 info.go:137] Remote host: Buildroot 2023.02.9
	I0409 00:16:52.150244   52680 filesync.go:126] Scanning /home/jenkins/minikube-integration/20501-9125/.minikube/addons for local assets ...
	I0409 00:16:52.150303   52680 filesync.go:126] Scanning /home/jenkins/minikube-integration/20501-9125/.minikube/files for local assets ...
	I0409 00:16:52.150379   52680 filesync.go:149] local asset: /home/jenkins/minikube-integration/20501-9125/.minikube/files/etc/ssl/certs/163142.pem -> 163142.pem in /etc/ssl/certs
	I0409 00:16:52.150499   52680 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0409 00:16:52.159592   52680 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20501-9125/.minikube/files/etc/ssl/certs/163142.pem --> /etc/ssl/certs/163142.pem (1708 bytes)
	I0409 00:16:52.184288   52680 start.go:296] duration metric: took 126.868957ms for postStartSetup
	I0409 00:16:52.184328   52680 fix.go:56] duration metric: took 19.946769115s for fixHost
	I0409 00:16:52.184349   52680 main.go:141] libmachine: (test-preload-623381) Calling .GetSSHHostname
	I0409 00:16:52.187209   52680 main.go:141] libmachine: (test-preload-623381) DBG | domain test-preload-623381 has defined MAC address 52:54:00:dd:9e:95 in network mk-test-preload-623381
	I0409 00:16:52.187588   52680 main.go:141] libmachine: (test-preload-623381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:9e:95", ip: ""} in network mk-test-preload-623381: {Iface:virbr1 ExpiryTime:2025-04-09 01:16:42 +0000 UTC Type:0 Mac:52:54:00:dd:9e:95 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:test-preload-623381 Clientid:01:52:54:00:dd:9e:95}
	I0409 00:16:52.187615   52680 main.go:141] libmachine: (test-preload-623381) DBG | domain test-preload-623381 has defined IP address 192.168.39.104 and MAC address 52:54:00:dd:9e:95 in network mk-test-preload-623381
	I0409 00:16:52.187729   52680 main.go:141] libmachine: (test-preload-623381) Calling .GetSSHPort
	I0409 00:16:52.187949   52680 main.go:141] libmachine: (test-preload-623381) Calling .GetSSHKeyPath
	I0409 00:16:52.188085   52680 main.go:141] libmachine: (test-preload-623381) Calling .GetSSHKeyPath
	I0409 00:16:52.188217   52680 main.go:141] libmachine: (test-preload-623381) Calling .GetSSHUsername
	I0409 00:16:52.188368   52680 main.go:141] libmachine: Using SSH client type: native
	I0409 00:16:52.188550   52680 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.104 22 <nil> <nil>}
	I0409 00:16:52.188560   52680 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0409 00:16:52.300267   52680 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744157812.277378356
	
	I0409 00:16:52.300290   52680 fix.go:216] guest clock: 1744157812.277378356
	I0409 00:16:52.300299   52680 fix.go:229] Guest: 2025-04-09 00:16:52.277378356 +0000 UTC Remote: 2025-04-09 00:16:52.184332031 +0000 UTC m=+32.843625785 (delta=93.046325ms)
	I0409 00:16:52.300343   52680 fix.go:200] guest clock delta is within tolerance: 93.046325ms
	I0409 00:16:52.300364   52680 start.go:83] releasing machines lock for "test-preload-623381", held for 20.0628105s
	I0409 00:16:52.300394   52680 main.go:141] libmachine: (test-preload-623381) Calling .DriverName
	I0409 00:16:52.300638   52680 main.go:141] libmachine: (test-preload-623381) Calling .GetIP
	I0409 00:16:52.303267   52680 main.go:141] libmachine: (test-preload-623381) DBG | domain test-preload-623381 has defined MAC address 52:54:00:dd:9e:95 in network mk-test-preload-623381
	I0409 00:16:52.303595   52680 main.go:141] libmachine: (test-preload-623381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:9e:95", ip: ""} in network mk-test-preload-623381: {Iface:virbr1 ExpiryTime:2025-04-09 01:16:42 +0000 UTC Type:0 Mac:52:54:00:dd:9e:95 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:test-preload-623381 Clientid:01:52:54:00:dd:9e:95}
	I0409 00:16:52.303621   52680 main.go:141] libmachine: (test-preload-623381) DBG | domain test-preload-623381 has defined IP address 192.168.39.104 and MAC address 52:54:00:dd:9e:95 in network mk-test-preload-623381
	I0409 00:16:52.303747   52680 main.go:141] libmachine: (test-preload-623381) Calling .DriverName
	I0409 00:16:52.304284   52680 main.go:141] libmachine: (test-preload-623381) Calling .DriverName
	I0409 00:16:52.304471   52680 main.go:141] libmachine: (test-preload-623381) Calling .DriverName
	I0409 00:16:52.304565   52680 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0409 00:16:52.304604   52680 main.go:141] libmachine: (test-preload-623381) Calling .GetSSHHostname
	I0409 00:16:52.304678   52680 ssh_runner.go:195] Run: cat /version.json
	I0409 00:16:52.304703   52680 main.go:141] libmachine: (test-preload-623381) Calling .GetSSHHostname
	I0409 00:16:52.307440   52680 main.go:141] libmachine: (test-preload-623381) DBG | domain test-preload-623381 has defined MAC address 52:54:00:dd:9e:95 in network mk-test-preload-623381
	I0409 00:16:52.307516   52680 main.go:141] libmachine: (test-preload-623381) DBG | domain test-preload-623381 has defined MAC address 52:54:00:dd:9e:95 in network mk-test-preload-623381
	I0409 00:16:52.307781   52680 main.go:141] libmachine: (test-preload-623381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:9e:95", ip: ""} in network mk-test-preload-623381: {Iface:virbr1 ExpiryTime:2025-04-09 01:16:42 +0000 UTC Type:0 Mac:52:54:00:dd:9e:95 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:test-preload-623381 Clientid:01:52:54:00:dd:9e:95}
	I0409 00:16:52.307811   52680 main.go:141] libmachine: (test-preload-623381) DBG | domain test-preload-623381 has defined IP address 192.168.39.104 and MAC address 52:54:00:dd:9e:95 in network mk-test-preload-623381
	I0409 00:16:52.307837   52680 main.go:141] libmachine: (test-preload-623381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:9e:95", ip: ""} in network mk-test-preload-623381: {Iface:virbr1 ExpiryTime:2025-04-09 01:16:42 +0000 UTC Type:0 Mac:52:54:00:dd:9e:95 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:test-preload-623381 Clientid:01:52:54:00:dd:9e:95}
	I0409 00:16:52.307854   52680 main.go:141] libmachine: (test-preload-623381) DBG | domain test-preload-623381 has defined IP address 192.168.39.104 and MAC address 52:54:00:dd:9e:95 in network mk-test-preload-623381
	I0409 00:16:52.308046   52680 main.go:141] libmachine: (test-preload-623381) Calling .GetSSHPort
	I0409 00:16:52.308131   52680 main.go:141] libmachine: (test-preload-623381) Calling .GetSSHPort
	I0409 00:16:52.308236   52680 main.go:141] libmachine: (test-preload-623381) Calling .GetSSHKeyPath
	I0409 00:16:52.308238   52680 main.go:141] libmachine: (test-preload-623381) Calling .GetSSHKeyPath
	I0409 00:16:52.308401   52680 main.go:141] libmachine: (test-preload-623381) Calling .GetSSHUsername
	I0409 00:16:52.308408   52680 main.go:141] libmachine: (test-preload-623381) Calling .GetSSHUsername
	I0409 00:16:52.308523   52680 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20501-9125/.minikube/machines/test-preload-623381/id_rsa Username:docker}
	I0409 00:16:52.308557   52680 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20501-9125/.minikube/machines/test-preload-623381/id_rsa Username:docker}
	I0409 00:16:52.422977   52680 ssh_runner.go:195] Run: systemctl --version
	I0409 00:16:52.428624   52680 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0409 00:16:52.566687   52680 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0409 00:16:52.572266   52680 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0409 00:16:52.572328   52680 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0409 00:16:52.586520   52680 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0409 00:16:52.586546   52680 start.go:495] detecting cgroup driver to use...
	I0409 00:16:52.586606   52680 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0409 00:16:52.600928   52680 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0409 00:16:52.613625   52680 docker.go:217] disabling cri-docker service (if available) ...
	I0409 00:16:52.613674   52680 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0409 00:16:52.625889   52680 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0409 00:16:52.638764   52680 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0409 00:16:52.756317   52680 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0409 00:16:52.890548   52680 docker.go:233] disabling docker service ...
	I0409 00:16:52.890624   52680 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0409 00:16:52.904028   52680 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0409 00:16:52.916298   52680 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0409 00:16:53.045650   52680 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0409 00:16:53.162271   52680 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0409 00:16:53.176825   52680 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0409 00:16:53.193751   52680 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.7" pause image...
	I0409 00:16:53.193814   52680 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.7"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0409 00:16:53.203285   52680 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0409 00:16:53.203333   52680 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0409 00:16:53.212711   52680 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0409 00:16:53.221881   52680 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0409 00:16:53.231802   52680 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0409 00:16:53.242262   52680 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0409 00:16:53.251761   52680 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0409 00:16:53.266759   52680 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0409 00:16:53.276277   52680 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0409 00:16:53.284862   52680 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0409 00:16:53.284923   52680 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0409 00:16:53.308599   52680 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0409 00:16:53.317293   52680 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0409 00:16:53.433916   52680 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0409 00:16:53.513949   52680 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0409 00:16:53.514019   52680 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0409 00:16:53.518208   52680 start.go:563] Will wait 60s for crictl version
	I0409 00:16:53.518254   52680 ssh_runner.go:195] Run: which crictl
	I0409 00:16:53.521691   52680 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0409 00:16:53.558696   52680 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0409 00:16:53.558771   52680 ssh_runner.go:195] Run: crio --version
	I0409 00:16:53.585500   52680 ssh_runner.go:195] Run: crio --version
	I0409 00:16:53.613147   52680 out.go:177] * Preparing Kubernetes v1.24.4 on CRI-O 1.29.1 ...
	I0409 00:16:53.614517   52680 main.go:141] libmachine: (test-preload-623381) Calling .GetIP
	I0409 00:16:53.617347   52680 main.go:141] libmachine: (test-preload-623381) DBG | domain test-preload-623381 has defined MAC address 52:54:00:dd:9e:95 in network mk-test-preload-623381
	I0409 00:16:53.617781   52680 main.go:141] libmachine: (test-preload-623381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:9e:95", ip: ""} in network mk-test-preload-623381: {Iface:virbr1 ExpiryTime:2025-04-09 01:16:42 +0000 UTC Type:0 Mac:52:54:00:dd:9e:95 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:test-preload-623381 Clientid:01:52:54:00:dd:9e:95}
	I0409 00:16:53.617805   52680 main.go:141] libmachine: (test-preload-623381) DBG | domain test-preload-623381 has defined IP address 192.168.39.104 and MAC address 52:54:00:dd:9e:95 in network mk-test-preload-623381
	I0409 00:16:53.618029   52680 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0409 00:16:53.621875   52680 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0409 00:16:53.633808   52680 kubeadm.go:883] updating cluster {Name:test-preload-623381 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-623381 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.104 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0409 00:16:53.633917   52680 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0409 00:16:53.633985   52680 ssh_runner.go:195] Run: sudo crictl images --output json
	I0409 00:16:53.667324   52680 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0409 00:16:53.667397   52680 ssh_runner.go:195] Run: which lz4
	I0409 00:16:53.670925   52680 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0409 00:16:53.674655   52680 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0409 00:16:53.674686   52680 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20501-9125/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (459355427 bytes)
	I0409 00:16:54.996328   52680 crio.go:462] duration metric: took 1.325424505s to copy over tarball
	I0409 00:16:54.996398   52680 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0409 00:16:57.309209   52680 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.31277579s)
	I0409 00:16:57.309232   52680 crio.go:469] duration metric: took 2.312878117s to extract the tarball
	I0409 00:16:57.309239   52680 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0409 00:16:57.351338   52680 ssh_runner.go:195] Run: sudo crictl images --output json
	I0409 00:16:57.389762   52680 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0409 00:16:57.389787   52680 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.4 registry.k8s.io/kube-controller-manager:v1.24.4 registry.k8s.io/kube-scheduler:v1.24.4 registry.k8s.io/kube-proxy:v1.24.4 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0409 00:16:57.389838   52680 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0409 00:16:57.389861   52680 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0409 00:16:57.389891   52680 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0409 00:16:57.389924   52680 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0409 00:16:57.389970   52680 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0409 00:16:57.389963   52680 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0409 00:16:57.390001   52680 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0409 00:16:57.390038   52680 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0409 00:16:57.391209   52680 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0409 00:16:57.391220   52680 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0409 00:16:57.391224   52680 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0409 00:16:57.391248   52680 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0409 00:16:57.391253   52680 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0409 00:16:57.391259   52680 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0409 00:16:57.391264   52680 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0409 00:16:57.391279   52680 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0409 00:16:57.582602   52680 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.4
	I0409 00:16:57.600718   52680 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.4
	I0409 00:16:57.622505   52680 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.4" needs transfer: "registry.k8s.io/kube-proxy:v1.24.4" does not exist at hash "7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7" in container runtime
	I0409 00:16:57.622548   52680 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.24.4
	I0409 00:16:57.622597   52680 ssh_runner.go:195] Run: which crictl
	I0409 00:16:57.647899   52680 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.4" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.4" does not exist at hash "6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d" in container runtime
	I0409 00:16:57.647940   52680 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.24.4
	I0409 00:16:57.647942   52680 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0409 00:16:57.647968   52680 ssh_runner.go:195] Run: which crictl
	I0409 00:16:57.653393   52680 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0409 00:16:57.686963   52680 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0409 00:16:57.686980   52680 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0409 00:16:57.701397   52680 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165" in container runtime
	I0409 00:16:57.701454   52680 cri.go:218] Removing image: registry.k8s.io/pause:3.7
	I0409 00:16:57.701506   52680 ssh_runner.go:195] Run: which crictl
	I0409 00:16:57.740523   52680 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0409 00:16:57.740852   52680 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0409 00:16:57.746209   52680 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.4
	I0409 00:16:57.746733   52680 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.4
	I0409 00:16:57.751427   52680 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0409 00:16:57.751448   52680 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0409 00:16:57.751489   52680 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0409 00:16:57.885302   52680 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b" in container runtime
	I0409 00:16:57.885342   52680 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0409 00:16:57.885389   52680 ssh_runner.go:195] Run: which crictl
	I0409 00:16:57.898190   52680 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03" in container runtime
	I0409 00:16:57.898234   52680 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0409 00:16:57.898283   52680 ssh_runner.go:195] Run: which crictl
	I0409 00:16:57.898308   52680 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.4" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.4" does not exist at hash "03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9" in container runtime
	I0409 00:16:57.898341   52680 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.24.4
	I0409 00:16:57.898383   52680 ssh_runner.go:195] Run: which crictl
	I0409 00:16:57.949210   52680 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.4" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.4" does not exist at hash "1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48" in container runtime
	I0409 00:16:57.949263   52680 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0409 00:16:57.949303   52680 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0409 00:16:57.949331   52680 ssh_runner.go:195] Run: which crictl
	I0409 00:16:57.949363   52680 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0409 00:16:57.949393   52680 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20501-9125/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4
	I0409 00:16:57.949439   52680 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0409 00:16:57.949477   52680 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.24.4
	I0409 00:16:57.949493   52680 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0409 00:16:57.949527   52680 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0409 00:16:58.045838   52680 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0409 00:16:58.045926   52680 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0409 00:16:58.045929   52680 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.24.4 (exists)
	I0409 00:16:58.045989   52680 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0409 00:16:58.045994   52680 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.24.4
	I0409 00:16:58.045999   52680 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20501-9125/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4
	I0409 00:16:58.046021   52680 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.24.4
	I0409 00:16:58.046080   52680 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0409 00:16:58.046082   52680 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0409 00:16:58.046124   52680 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0409 00:16:58.636321   52680 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0409 00:17:01.354589   52680 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.24.4: (3.30854633s)
	I0409 00:17:01.354618   52680 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20501-9125/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4 from cache
	I0409 00:17:01.354635   52680 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6: (3.308768866s)
	I0409 00:17:01.354671   52680 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4: (3.308488705s)
	I0409 00:17:01.354715   52680 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0409 00:17:01.354726   52680 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0409 00:17:01.354741   52680 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.24.4: (3.308600356s)
	I0409 00:17:01.354763   52680 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.24.4 (exists)
	I0409 00:17:01.354776   52680 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0409 00:17:01.354805   52680 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0409 00:17:01.354821   52680 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7: (3.30881034s)
	I0409 00:17:01.354865   52680 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20501-9125/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7
	I0409 00:17:01.354874   52680 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0: (3.308928863s)
	I0409 00:17:01.354915   52680 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4: (3.308817499s)
	I0409 00:17:01.354962   52680 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0409 00:17:01.354970   52680 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0409 00:17:01.354983   52680 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.718621129s)
	I0409 00:17:01.354917   52680 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0409 00:17:02.168356   52680 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20501-9125/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4 from cache
	I0409 00:17:02.168490   52680 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20501-9125/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4
	I0409 00:17:02.168561   52680 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20501-9125/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6
	I0409 00:17:02.168601   52680 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0409 00:17:02.168611   52680 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20501-9125/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0
	I0409 00:17:02.168647   52680 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0409 00:17:02.168661   52680 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.7 (exists)
	I0409 00:17:02.168670   52680 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.7
	I0409 00:17:02.168675   52680 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0409 00:17:02.168696   52680 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.7
	I0409 00:17:02.168755   52680 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0409 00:17:02.220006   52680 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20501-9125/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0409 00:17:02.220102   52680 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0409 00:17:02.336155   52680 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.8.6 (exists)
	I0409 00:17:02.336227   52680 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.3-0 (exists)
	I0409 00:17:02.336277   52680 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20501-9125/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7 from cache
	I0409 00:17:02.336308   52680 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0409 00:17:02.336332   52680 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.24.4 (exists)
	I0409 00:17:02.336360   52680 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.8.6
	I0409 00:17:02.336362   52680 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.24.4 (exists)
	I0409 00:17:02.676795   52680 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20501-9125/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0409 00:17:02.676853   52680 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0409 00:17:02.676910   52680 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0
	I0409 00:17:04.822433   52680 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0: (2.14549872s)
	I0409 00:17:04.822464   52680 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20501-9125/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0409 00:17:04.822495   52680 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0409 00:17:04.822560   52680 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0409 00:17:05.272770   52680 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20501-9125/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4 from cache
	I0409 00:17:05.272820   52680 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0409 00:17:05.272892   52680 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0409 00:17:05.915403   52680 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20501-9125/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4 from cache
	I0409 00:17:05.915439   52680 cache_images.go:123] Successfully loaded all cached images
	I0409 00:17:05.915443   52680 cache_images.go:92] duration metric: took 8.525644506s to LoadCachedImages
	I0409 00:17:05.915456   52680 kubeadm.go:934] updating node { 192.168.39.104 8443 v1.24.4 crio true true} ...
	I0409 00:17:05.915556   52680 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=test-preload-623381 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.104
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.4 ClusterName:test-preload-623381 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0409 00:17:05.915615   52680 ssh_runner.go:195] Run: crio config
	I0409 00:17:05.961697   52680 cni.go:84] Creating CNI manager for ""
	I0409 00:17:05.961717   52680 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0409 00:17:05.961726   52680 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0409 00:17:05.961742   52680 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.104 APIServerPort:8443 KubernetesVersion:v1.24.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-623381 NodeName:test-preload-623381 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.104"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.104 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0409 00:17:05.961852   52680 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.104
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-623381"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.104
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.104"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0409 00:17:05.961916   52680 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.4
	I0409 00:17:05.971571   52680 binaries.go:44] Found k8s binaries, skipping transfer
	I0409 00:17:05.971635   52680 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0409 00:17:05.980393   52680 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I0409 00:17:05.996897   52680 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0409 00:17:06.013457   52680 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2106 bytes)
	I0409 00:17:06.029934   52680 ssh_runner.go:195] Run: grep 192.168.39.104	control-plane.minikube.internal$ /etc/hosts
	I0409 00:17:06.033412   52680 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.104	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0409 00:17:06.044284   52680 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0409 00:17:06.175390   52680 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0409 00:17:06.196030   52680 certs.go:68] Setting up /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/test-preload-623381 for IP: 192.168.39.104
	I0409 00:17:06.196055   52680 certs.go:194] generating shared ca certs ...
	I0409 00:17:06.196076   52680 certs.go:226] acquiring lock for ca certs: {Name:mk0d455aae85017ac942481bbc1202ccedea144f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0409 00:17:06.196266   52680 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20501-9125/.minikube/ca.key
	I0409 00:17:06.196310   52680 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20501-9125/.minikube/proxy-client-ca.key
	I0409 00:17:06.196328   52680 certs.go:256] generating profile certs ...
	I0409 00:17:06.196434   52680 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/test-preload-623381/client.key
	I0409 00:17:06.196567   52680 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/test-preload-623381/apiserver.key.a1c38b28
	I0409 00:17:06.196633   52680 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/test-preload-623381/proxy-client.key
	I0409 00:17:06.196774   52680 certs.go:484] found cert: /home/jenkins/minikube-integration/20501-9125/.minikube/certs/16314.pem (1338 bytes)
	W0409 00:17:06.196826   52680 certs.go:480] ignoring /home/jenkins/minikube-integration/20501-9125/.minikube/certs/16314_empty.pem, impossibly tiny 0 bytes
	I0409 00:17:06.196841   52680 certs.go:484] found cert: /home/jenkins/minikube-integration/20501-9125/.minikube/certs/ca-key.pem (1679 bytes)
	I0409 00:17:06.196881   52680 certs.go:484] found cert: /home/jenkins/minikube-integration/20501-9125/.minikube/certs/ca.pem (1082 bytes)
	I0409 00:17:06.196930   52680 certs.go:484] found cert: /home/jenkins/minikube-integration/20501-9125/.minikube/certs/cert.pem (1123 bytes)
	I0409 00:17:06.196962   52680 certs.go:484] found cert: /home/jenkins/minikube-integration/20501-9125/.minikube/certs/key.pem (1675 bytes)
	I0409 00:17:06.197018   52680 certs.go:484] found cert: /home/jenkins/minikube-integration/20501-9125/.minikube/files/etc/ssl/certs/163142.pem (1708 bytes)
	I0409 00:17:06.197853   52680 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20501-9125/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0409 00:17:06.231571   52680 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20501-9125/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0409 00:17:06.278427   52680 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20501-9125/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0409 00:17:06.316974   52680 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20501-9125/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0409 00:17:06.341741   52680 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/test-preload-623381/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0409 00:17:06.373807   52680 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/test-preload-623381/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0409 00:17:06.404664   52680 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/test-preload-623381/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0409 00:17:06.430333   52680 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/test-preload-623381/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0409 00:17:06.451723   52680 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20501-9125/.minikube/files/etc/ssl/certs/163142.pem --> /usr/share/ca-certificates/163142.pem (1708 bytes)
	I0409 00:17:06.472773   52680 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20501-9125/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0409 00:17:06.493594   52680 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20501-9125/.minikube/certs/16314.pem --> /usr/share/ca-certificates/16314.pem (1338 bytes)
	I0409 00:17:06.514499   52680 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0409 00:17:06.530166   52680 ssh_runner.go:195] Run: openssl version
	I0409 00:17:06.535530   52680 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0409 00:17:06.545533   52680 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0409 00:17:06.549604   52680 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  8 22:46 /usr/share/ca-certificates/minikubeCA.pem
	I0409 00:17:06.549647   52680 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0409 00:17:06.554995   52680 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0409 00:17:06.564644   52680 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16314.pem && ln -fs /usr/share/ca-certificates/16314.pem /etc/ssl/certs/16314.pem"
	I0409 00:17:06.573959   52680 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16314.pem
	I0409 00:17:06.577737   52680 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  8 22:53 /usr/share/ca-certificates/16314.pem
	I0409 00:17:06.577780   52680 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16314.pem
	I0409 00:17:06.582726   52680 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16314.pem /etc/ssl/certs/51391683.0"
	I0409 00:17:06.592341   52680 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/163142.pem && ln -fs /usr/share/ca-certificates/163142.pem /etc/ssl/certs/163142.pem"
	I0409 00:17:06.601710   52680 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/163142.pem
	I0409 00:17:06.605640   52680 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  8 22:53 /usr/share/ca-certificates/163142.pem
	I0409 00:17:06.605697   52680 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/163142.pem
	I0409 00:17:06.610815   52680 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/163142.pem /etc/ssl/certs/3ec20f2e.0"
	I0409 00:17:06.620161   52680 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0409 00:17:06.624140   52680 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0409 00:17:06.629433   52680 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0409 00:17:06.634515   52680 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0409 00:17:06.639818   52680 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0409 00:17:06.644852   52680 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0409 00:17:06.649807   52680 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0409 00:17:06.654798   52680 kubeadm.go:392] StartCluster: {Name:test-preload-623381 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-623381 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.104 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0409 00:17:06.654866   52680 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0409 00:17:06.654897   52680 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0409 00:17:06.693016   52680 cri.go:89] found id: ""
	I0409 00:17:06.693101   52680 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0409 00:17:06.702319   52680 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0409 00:17:06.702337   52680 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0409 00:17:06.702374   52680 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0409 00:17:06.711069   52680 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0409 00:17:06.711466   52680 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-623381" does not appear in /home/jenkins/minikube-integration/20501-9125/kubeconfig
	I0409 00:17:06.711602   52680 kubeconfig.go:62] /home/jenkins/minikube-integration/20501-9125/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-623381" cluster setting kubeconfig missing "test-preload-623381" context setting]
	I0409 00:17:06.712096   52680 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20501-9125/kubeconfig: {Name:mk92c92b166b121ee2ee28c1b362d82cfe16b47a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0409 00:17:06.712594   52680 kapi.go:59] client config for test-preload-623381: &rest.Config{Host:"https://192.168.39.104:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20501-9125/.minikube/profiles/test-preload-623381/client.crt", KeyFile:"/home/jenkins/minikube-integration/20501-9125/.minikube/profiles/test-preload-623381/client.key", CAFile:"/home/jenkins/minikube-integration/20501-9125/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x24969e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0409 00:17:06.713015   52680 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0409 00:17:06.713034   52680 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0409 00:17:06.713038   52680 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0409 00:17:06.713042   52680 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0409 00:17:06.713351   52680 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0409 00:17:06.721791   52680 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.104
	I0409 00:17:06.721821   52680 kubeadm.go:1160] stopping kube-system containers ...
	I0409 00:17:06.721833   52680 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0409 00:17:06.721874   52680 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0409 00:17:06.758035   52680 cri.go:89] found id: ""
	I0409 00:17:06.758108   52680 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0409 00:17:06.773820   52680 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0409 00:17:06.783094   52680 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0409 00:17:06.783108   52680 kubeadm.go:157] found existing configuration files:
	
	I0409 00:17:06.783145   52680 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0409 00:17:06.791661   52680 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0409 00:17:06.791701   52680 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0409 00:17:06.800358   52680 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0409 00:17:06.808633   52680 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0409 00:17:06.808677   52680 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0409 00:17:06.817293   52680 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0409 00:17:06.825550   52680 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0409 00:17:06.825587   52680 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0409 00:17:06.834336   52680 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0409 00:17:06.842617   52680 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0409 00:17:06.842653   52680 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0409 00:17:06.851595   52680 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0409 00:17:06.860432   52680 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0409 00:17:06.949095   52680 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0409 00:17:07.844036   52680 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0409 00:17:08.098425   52680 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0409 00:17:08.161793   52680 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0409 00:17:08.225332   52680 api_server.go:52] waiting for apiserver process to appear ...
	I0409 00:17:08.225410   52680 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0409 00:17:08.726442   52680 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0409 00:17:09.225824   52680 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0409 00:17:09.240440   52680 api_server.go:72] duration metric: took 1.015106829s to wait for apiserver process to appear ...
	I0409 00:17:09.240466   52680 api_server.go:88] waiting for apiserver healthz status ...
	I0409 00:17:09.240490   52680 api_server.go:253] Checking apiserver healthz at https://192.168.39.104:8443/healthz ...
	I0409 00:17:09.240922   52680 api_server.go:269] stopped: https://192.168.39.104:8443/healthz: Get "https://192.168.39.104:8443/healthz": dial tcp 192.168.39.104:8443: connect: connection refused
	I0409 00:17:09.740783   52680 api_server.go:253] Checking apiserver healthz at https://192.168.39.104:8443/healthz ...
	I0409 00:17:09.741309   52680 api_server.go:269] stopped: https://192.168.39.104:8443/healthz: Get "https://192.168.39.104:8443/healthz": dial tcp 192.168.39.104:8443: connect: connection refused
	I0409 00:17:10.240944   52680 api_server.go:253] Checking apiserver healthz at https://192.168.39.104:8443/healthz ...
	I0409 00:17:13.031492   52680 api_server.go:279] https://192.168.39.104:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0409 00:17:13.031524   52680 api_server.go:103] status: https://192.168.39.104:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0409 00:17:13.031551   52680 api_server.go:253] Checking apiserver healthz at https://192.168.39.104:8443/healthz ...
	I0409 00:17:13.071658   52680 api_server.go:279] https://192.168.39.104:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0409 00:17:13.071683   52680 api_server.go:103] status: https://192.168.39.104:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0409 00:17:13.241046   52680 api_server.go:253] Checking apiserver healthz at https://192.168.39.104:8443/healthz ...
	I0409 00:17:13.247970   52680 api_server.go:279] https://192.168.39.104:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0409 00:17:13.247991   52680 api_server.go:103] status: https://192.168.39.104:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0409 00:17:13.740599   52680 api_server.go:253] Checking apiserver healthz at https://192.168.39.104:8443/healthz ...
	I0409 00:17:13.747024   52680 api_server.go:279] https://192.168.39.104:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0409 00:17:13.747053   52680 api_server.go:103] status: https://192.168.39.104:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0409 00:17:14.240682   52680 api_server.go:253] Checking apiserver healthz at https://192.168.39.104:8443/healthz ...
	I0409 00:17:14.249111   52680 api_server.go:279] https://192.168.39.104:8443/healthz returned 200:
	ok
	I0409 00:17:14.256401   52680 api_server.go:141] control plane version: v1.24.4
	I0409 00:17:14.256425   52680 api_server.go:131] duration metric: took 5.01595181s to wait for apiserver health ...
	I0409 00:17:14.256436   52680 cni.go:84] Creating CNI manager for ""
	I0409 00:17:14.256444   52680 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0409 00:17:14.257989   52680 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0409 00:17:14.258957   52680 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0409 00:17:14.277200   52680 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0409 00:17:14.308544   52680 system_pods.go:43] waiting for kube-system pods to appear ...
	I0409 00:17:14.312081   52680 system_pods.go:59] 7 kube-system pods found
	I0409 00:17:14.312109   52680 system_pods.go:61] "coredns-6d4b75cb6d-jmxk4" [4128230e-7b53-4e2f-af8c-fef871743abc] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0409 00:17:14.312115   52680 system_pods.go:61] "etcd-test-preload-623381" [b26eb3c2-6e84-47c9-8d51-4c369e6b56fc] Running
	I0409 00:17:14.312124   52680 system_pods.go:61] "kube-apiserver-test-preload-623381" [26b71332-5451-4a2f-bea2-652ee24fa6c7] Running
	I0409 00:17:14.312132   52680 system_pods.go:61] "kube-controller-manager-test-preload-623381" [97f1ae7a-403a-40ea-a921-0cb84a8c16c6] Running
	I0409 00:17:14.312136   52680 system_pods.go:61] "kube-proxy-sk6cc" [bdcce1de-8a82-43ab-a379-e1ce978dc6a9] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0409 00:17:14.312151   52680 system_pods.go:61] "kube-scheduler-test-preload-623381" [1318b37a-35d5-479a-9340-e64ab363a529] Running
	I0409 00:17:14.312158   52680 system_pods.go:61] "storage-provisioner" [1cb1112b-5117-49e3-be71-3201c29071de] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0409 00:17:14.312169   52680 system_pods.go:74] duration metric: took 3.600867ms to wait for pod list to return data ...
	I0409 00:17:14.312181   52680 node_conditions.go:102] verifying NodePressure condition ...
	I0409 00:17:14.314483   52680 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0409 00:17:14.314504   52680 node_conditions.go:123] node cpu capacity is 2
	I0409 00:17:14.314514   52680 node_conditions.go:105] duration metric: took 2.329287ms to run NodePressure ...
	I0409 00:17:14.314529   52680 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0409 00:17:14.568449   52680 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0409 00:17:14.572053   52680 retry.go:31] will retry after 313.979632ms: kubelet not initialised
	I0409 00:17:14.891625   52680 retry.go:31] will retry after 384.597087ms: kubelet not initialised
	I0409 00:17:15.280833   52680 retry.go:31] will retry after 648.609364ms: kubelet not initialised
	I0409 00:17:15.933688   52680 retry.go:31] will retry after 1.216396742s: kubelet not initialised
	I0409 00:17:17.157015   52680 retry.go:31] will retry after 1.347087231s: kubelet not initialised
	I0409 00:17:18.510053   52680 kubeadm.go:739] kubelet initialised
	I0409 00:17:18.510073   52680 kubeadm.go:740] duration metric: took 3.941595276s waiting for restarted kubelet to initialise ...
	I0409 00:17:18.510080   52680 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0409 00:17:18.526885   52680 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6d4b75cb6d-jmxk4" in "kube-system" namespace to be "Ready" ...
	I0409 00:17:18.535618   52680 pod_ready.go:98] node "test-preload-623381" hosting pod "coredns-6d4b75cb6d-jmxk4" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-623381" has status "Ready":"False"
	I0409 00:17:18.535638   52680 pod_ready.go:82] duration metric: took 8.719978ms for pod "coredns-6d4b75cb6d-jmxk4" in "kube-system" namespace to be "Ready" ...
	E0409 00:17:18.535646   52680 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-623381" hosting pod "coredns-6d4b75cb6d-jmxk4" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-623381" has status "Ready":"False"
	I0409 00:17:18.535653   52680 pod_ready.go:79] waiting up to 4m0s for pod "etcd-test-preload-623381" in "kube-system" namespace to be "Ready" ...
	I0409 00:17:18.541595   52680 pod_ready.go:98] node "test-preload-623381" hosting pod "etcd-test-preload-623381" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-623381" has status "Ready":"False"
	I0409 00:17:18.541621   52680 pod_ready.go:82] duration metric: took 5.95743ms for pod "etcd-test-preload-623381" in "kube-system" namespace to be "Ready" ...
	E0409 00:17:18.541631   52680 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-623381" hosting pod "etcd-test-preload-623381" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-623381" has status "Ready":"False"
	I0409 00:17:18.541639   52680 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-test-preload-623381" in "kube-system" namespace to be "Ready" ...
	I0409 00:17:18.550786   52680 pod_ready.go:98] node "test-preload-623381" hosting pod "kube-apiserver-test-preload-623381" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-623381" has status "Ready":"False"
	I0409 00:17:18.550810   52680 pod_ready.go:82] duration metric: took 9.160485ms for pod "kube-apiserver-test-preload-623381" in "kube-system" namespace to be "Ready" ...
	E0409 00:17:18.550822   52680 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-623381" hosting pod "kube-apiserver-test-preload-623381" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-623381" has status "Ready":"False"
	I0409 00:17:18.550831   52680 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-test-preload-623381" in "kube-system" namespace to be "Ready" ...
	I0409 00:17:18.555582   52680 pod_ready.go:98] node "test-preload-623381" hosting pod "kube-controller-manager-test-preload-623381" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-623381" has status "Ready":"False"
	I0409 00:17:18.555606   52680 pod_ready.go:82] duration metric: took 4.761053ms for pod "kube-controller-manager-test-preload-623381" in "kube-system" namespace to be "Ready" ...
	E0409 00:17:18.555617   52680 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-623381" hosting pod "kube-controller-manager-test-preload-623381" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-623381" has status "Ready":"False"
	I0409 00:17:18.555626   52680 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-sk6cc" in "kube-system" namespace to be "Ready" ...
	I0409 00:17:18.908811   52680 pod_ready.go:98] node "test-preload-623381" hosting pod "kube-proxy-sk6cc" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-623381" has status "Ready":"False"
	I0409 00:17:18.908843   52680 pod_ready.go:82] duration metric: took 353.202681ms for pod "kube-proxy-sk6cc" in "kube-system" namespace to be "Ready" ...
	E0409 00:17:18.908856   52680 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-623381" hosting pod "kube-proxy-sk6cc" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-623381" has status "Ready":"False"
	I0409 00:17:18.908865   52680 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-test-preload-623381" in "kube-system" namespace to be "Ready" ...
	I0409 00:17:19.308022   52680 pod_ready.go:98] node "test-preload-623381" hosting pod "kube-scheduler-test-preload-623381" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-623381" has status "Ready":"False"
	I0409 00:17:19.308051   52680 pod_ready.go:82] duration metric: took 399.176884ms for pod "kube-scheduler-test-preload-623381" in "kube-system" namespace to be "Ready" ...
	E0409 00:17:19.308063   52680 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-623381" hosting pod "kube-scheduler-test-preload-623381" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-623381" has status "Ready":"False"
	I0409 00:17:19.308073   52680 pod_ready.go:39] duration metric: took 797.981297ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0409 00:17:19.308095   52680 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0409 00:17:19.319853   52680 ops.go:34] apiserver oom_adj: -16
	I0409 00:17:19.319892   52680 kubeadm.go:597] duration metric: took 12.617549402s to restartPrimaryControlPlane
	I0409 00:17:19.319901   52680 kubeadm.go:394] duration metric: took 12.665107621s to StartCluster
	I0409 00:17:19.319919   52680 settings.go:142] acquiring lock: {Name:mk362ccb6fac1c71fdd578f798171322d97c1c2b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0409 00:17:19.319995   52680 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20501-9125/kubeconfig
	I0409 00:17:19.320611   52680 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20501-9125/kubeconfig: {Name:mk92c92b166b121ee2ee28c1b362d82cfe16b47a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0409 00:17:19.320839   52680 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.104 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0409 00:17:19.320895   52680 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0409 00:17:19.320979   52680 addons.go:69] Setting storage-provisioner=true in profile "test-preload-623381"
	I0409 00:17:19.320994   52680 addons.go:69] Setting default-storageclass=true in profile "test-preload-623381"
	I0409 00:17:19.321004   52680 addons.go:238] Setting addon storage-provisioner=true in "test-preload-623381"
	W0409 00:17:19.321013   52680 addons.go:247] addon storage-provisioner should already be in state true
	I0409 00:17:19.321014   52680 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-623381"
	I0409 00:17:19.321037   52680 host.go:66] Checking if "test-preload-623381" exists ...
	I0409 00:17:19.321054   52680 config.go:182] Loaded profile config "test-preload-623381": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0409 00:17:19.321405   52680 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0409 00:17:19.321427   52680 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0409 00:17:19.321453   52680 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0409 00:17:19.321539   52680 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0409 00:17:19.322359   52680 out.go:177] * Verifying Kubernetes components...
	I0409 00:17:19.323389   52680 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0409 00:17:19.336469   52680 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46359
	I0409 00:17:19.336936   52680 main.go:141] libmachine: () Calling .GetVersion
	I0409 00:17:19.337393   52680 main.go:141] libmachine: Using API Version  1
	I0409 00:17:19.337416   52680 main.go:141] libmachine: () Calling .SetConfigRaw
	I0409 00:17:19.337780   52680 main.go:141] libmachine: () Calling .GetMachineName
	I0409 00:17:19.338285   52680 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0409 00:17:19.338325   52680 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0409 00:17:19.340959   52680 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41523
	I0409 00:17:19.341428   52680 main.go:141] libmachine: () Calling .GetVersion
	I0409 00:17:19.341848   52680 main.go:141] libmachine: Using API Version  1
	I0409 00:17:19.341866   52680 main.go:141] libmachine: () Calling .SetConfigRaw
	I0409 00:17:19.342266   52680 main.go:141] libmachine: () Calling .GetMachineName
	I0409 00:17:19.342456   52680 main.go:141] libmachine: (test-preload-623381) Calling .GetState
	I0409 00:17:19.344606   52680 kapi.go:59] client config for test-preload-623381: &rest.Config{Host:"https://192.168.39.104:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20501-9125/.minikube/profiles/test-preload-623381/client.crt", KeyFile:"/home/jenkins/minikube-integration/20501-9125/.minikube/profiles/test-preload-623381/client.key", CAFile:"/home/jenkins/minikube-integration/20501-9125/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x24969e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0409 00:17:19.344880   52680 addons.go:238] Setting addon default-storageclass=true in "test-preload-623381"
	W0409 00:17:19.344900   52680 addons.go:247] addon default-storageclass should already be in state true
	I0409 00:17:19.344935   52680 host.go:66] Checking if "test-preload-623381" exists ...
	I0409 00:17:19.345198   52680 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0409 00:17:19.345240   52680 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0409 00:17:19.354447   52680 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42641
	I0409 00:17:19.354871   52680 main.go:141] libmachine: () Calling .GetVersion
	I0409 00:17:19.355349   52680 main.go:141] libmachine: Using API Version  1
	I0409 00:17:19.355377   52680 main.go:141] libmachine: () Calling .SetConfigRaw
	I0409 00:17:19.355735   52680 main.go:141] libmachine: () Calling .GetMachineName
	I0409 00:17:19.355934   52680 main.go:141] libmachine: (test-preload-623381) Calling .GetState
	I0409 00:17:19.357486   52680 main.go:141] libmachine: (test-preload-623381) Calling .DriverName
	I0409 00:17:19.359361   52680 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0409 00:17:19.360725   52680 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0409 00:17:19.360744   52680 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0409 00:17:19.360762   52680 main.go:141] libmachine: (test-preload-623381) Calling .GetSSHHostname
	I0409 00:17:19.361381   52680 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40285
	I0409 00:17:19.361743   52680 main.go:141] libmachine: () Calling .GetVersion
	I0409 00:17:19.362257   52680 main.go:141] libmachine: Using API Version  1
	I0409 00:17:19.362280   52680 main.go:141] libmachine: () Calling .SetConfigRaw
	I0409 00:17:19.362653   52680 main.go:141] libmachine: () Calling .GetMachineName
	I0409 00:17:19.363246   52680 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0409 00:17:19.363300   52680 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0409 00:17:19.364090   52680 main.go:141] libmachine: (test-preload-623381) DBG | domain test-preload-623381 has defined MAC address 52:54:00:dd:9e:95 in network mk-test-preload-623381
	I0409 00:17:19.364475   52680 main.go:141] libmachine: (test-preload-623381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:9e:95", ip: ""} in network mk-test-preload-623381: {Iface:virbr1 ExpiryTime:2025-04-09 01:16:42 +0000 UTC Type:0 Mac:52:54:00:dd:9e:95 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:test-preload-623381 Clientid:01:52:54:00:dd:9e:95}
	I0409 00:17:19.364498   52680 main.go:141] libmachine: (test-preload-623381) DBG | domain test-preload-623381 has defined IP address 192.168.39.104 and MAC address 52:54:00:dd:9e:95 in network mk-test-preload-623381
	I0409 00:17:19.364682   52680 main.go:141] libmachine: (test-preload-623381) Calling .GetSSHPort
	I0409 00:17:19.364835   52680 main.go:141] libmachine: (test-preload-623381) Calling .GetSSHKeyPath
	I0409 00:17:19.365075   52680 main.go:141] libmachine: (test-preload-623381) Calling .GetSSHUsername
	I0409 00:17:19.365262   52680 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20501-9125/.minikube/machines/test-preload-623381/id_rsa Username:docker}
	I0409 00:17:19.411652   52680 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46603
	I0409 00:17:19.412151   52680 main.go:141] libmachine: () Calling .GetVersion
	I0409 00:17:19.412594   52680 main.go:141] libmachine: Using API Version  1
	I0409 00:17:19.412619   52680 main.go:141] libmachine: () Calling .SetConfigRaw
	I0409 00:17:19.413001   52680 main.go:141] libmachine: () Calling .GetMachineName
	I0409 00:17:19.413209   52680 main.go:141] libmachine: (test-preload-623381) Calling .GetState
	I0409 00:17:19.414789   52680 main.go:141] libmachine: (test-preload-623381) Calling .DriverName
	I0409 00:17:19.415003   52680 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0409 00:17:19.415019   52680 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0409 00:17:19.415039   52680 main.go:141] libmachine: (test-preload-623381) Calling .GetSSHHostname
	I0409 00:17:19.417794   52680 main.go:141] libmachine: (test-preload-623381) DBG | domain test-preload-623381 has defined MAC address 52:54:00:dd:9e:95 in network mk-test-preload-623381
	I0409 00:17:19.418258   52680 main.go:141] libmachine: (test-preload-623381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:9e:95", ip: ""} in network mk-test-preload-623381: {Iface:virbr1 ExpiryTime:2025-04-09 01:16:42 +0000 UTC Type:0 Mac:52:54:00:dd:9e:95 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:test-preload-623381 Clientid:01:52:54:00:dd:9e:95}
	I0409 00:17:19.418289   52680 main.go:141] libmachine: (test-preload-623381) DBG | domain test-preload-623381 has defined IP address 192.168.39.104 and MAC address 52:54:00:dd:9e:95 in network mk-test-preload-623381
	I0409 00:17:19.418433   52680 main.go:141] libmachine: (test-preload-623381) Calling .GetSSHPort
	I0409 00:17:19.418619   52680 main.go:141] libmachine: (test-preload-623381) Calling .GetSSHKeyPath
	I0409 00:17:19.418769   52680 main.go:141] libmachine: (test-preload-623381) Calling .GetSSHUsername
	I0409 00:17:19.418903   52680 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20501-9125/.minikube/machines/test-preload-623381/id_rsa Username:docker}
	I0409 00:17:19.513057   52680 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0409 00:17:19.529075   52680 node_ready.go:35] waiting up to 6m0s for node "test-preload-623381" to be "Ready" ...
	I0409 00:17:19.637989   52680 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0409 00:17:19.656366   52680 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0409 00:17:20.530432   52680 main.go:141] libmachine: Making call to close driver server
	I0409 00:17:20.530470   52680 main.go:141] libmachine: (test-preload-623381) Calling .Close
	I0409 00:17:20.530495   52680 main.go:141] libmachine: Making call to close driver server
	I0409 00:17:20.530513   52680 main.go:141] libmachine: (test-preload-623381) Calling .Close
	I0409 00:17:20.530748   52680 main.go:141] libmachine: Successfully made call to close driver server
	I0409 00:17:20.530764   52680 main.go:141] libmachine: Making call to close connection to plugin binary
	I0409 00:17:20.530772   52680 main.go:141] libmachine: Making call to close driver server
	I0409 00:17:20.530779   52680 main.go:141] libmachine: (test-preload-623381) Calling .Close
	I0409 00:17:20.530814   52680 main.go:141] libmachine: (test-preload-623381) DBG | Closing plugin on server side
	I0409 00:17:20.530831   52680 main.go:141] libmachine: Successfully made call to close driver server
	I0409 00:17:20.530844   52680 main.go:141] libmachine: Making call to close connection to plugin binary
	I0409 00:17:20.530860   52680 main.go:141] libmachine: Making call to close driver server
	I0409 00:17:20.530868   52680 main.go:141] libmachine: (test-preload-623381) Calling .Close
	I0409 00:17:20.530966   52680 main.go:141] libmachine: Successfully made call to close driver server
	I0409 00:17:20.530989   52680 main.go:141] libmachine: Making call to close connection to plugin binary
	I0409 00:17:20.531221   52680 main.go:141] libmachine: (test-preload-623381) DBG | Closing plugin on server side
	I0409 00:17:20.531252   52680 main.go:141] libmachine: Successfully made call to close driver server
	I0409 00:17:20.531272   52680 main.go:141] libmachine: Making call to close connection to plugin binary
	I0409 00:17:20.537131   52680 main.go:141] libmachine: Making call to close driver server
	I0409 00:17:20.537147   52680 main.go:141] libmachine: (test-preload-623381) Calling .Close
	I0409 00:17:20.537326   52680 main.go:141] libmachine: Successfully made call to close driver server
	I0409 00:17:20.537340   52680 main.go:141] libmachine: Making call to close connection to plugin binary
	I0409 00:17:20.537350   52680 main.go:141] libmachine: (test-preload-623381) DBG | Closing plugin on server side
	I0409 00:17:20.539200   52680 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0409 00:17:20.540345   52680 addons.go:514] duration metric: took 1.219457877s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0409 00:17:21.532611   52680 node_ready.go:53] node "test-preload-623381" has status "Ready":"False"
	I0409 00:17:23.531935   52680 node_ready.go:49] node "test-preload-623381" has status "Ready":"True"
	I0409 00:17:23.531963   52680 node_ready.go:38] duration metric: took 4.002856577s for node "test-preload-623381" to be "Ready" ...
	I0409 00:17:23.531974   52680 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0409 00:17:23.535292   52680 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6d4b75cb6d-jmxk4" in "kube-system" namespace to be "Ready" ...
	I0409 00:17:23.538869   52680 pod_ready.go:93] pod "coredns-6d4b75cb6d-jmxk4" in "kube-system" namespace has status "Ready":"True"
	I0409 00:17:23.538891   52680 pod_ready.go:82] duration metric: took 3.571999ms for pod "coredns-6d4b75cb6d-jmxk4" in "kube-system" namespace to be "Ready" ...
	I0409 00:17:23.538905   52680 pod_ready.go:79] waiting up to 6m0s for pod "etcd-test-preload-623381" in "kube-system" namespace to be "Ready" ...
	I0409 00:17:25.044140   52680 pod_ready.go:93] pod "etcd-test-preload-623381" in "kube-system" namespace has status "Ready":"True"
	I0409 00:17:25.044165   52680 pod_ready.go:82] duration metric: took 1.505253216s for pod "etcd-test-preload-623381" in "kube-system" namespace to be "Ready" ...
	I0409 00:17:25.044174   52680 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-test-preload-623381" in "kube-system" namespace to be "Ready" ...
	I0409 00:17:25.047682   52680 pod_ready.go:93] pod "kube-apiserver-test-preload-623381" in "kube-system" namespace has status "Ready":"True"
	I0409 00:17:25.047698   52680 pod_ready.go:82] duration metric: took 3.51801ms for pod "kube-apiserver-test-preload-623381" in "kube-system" namespace to be "Ready" ...
	I0409 00:17:25.047706   52680 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-test-preload-623381" in "kube-system" namespace to be "Ready" ...
	I0409 00:17:25.050968   52680 pod_ready.go:93] pod "kube-controller-manager-test-preload-623381" in "kube-system" namespace has status "Ready":"True"
	I0409 00:17:25.050983   52680 pod_ready.go:82] duration metric: took 3.271832ms for pod "kube-controller-manager-test-preload-623381" in "kube-system" namespace to be "Ready" ...
	I0409 00:17:25.050991   52680 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-sk6cc" in "kube-system" namespace to be "Ready" ...
	I0409 00:17:25.132032   52680 pod_ready.go:93] pod "kube-proxy-sk6cc" in "kube-system" namespace has status "Ready":"True"
	I0409 00:17:25.132051   52680 pod_ready.go:82] duration metric: took 81.054757ms for pod "kube-proxy-sk6cc" in "kube-system" namespace to be "Ready" ...
	I0409 00:17:25.132060   52680 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-test-preload-623381" in "kube-system" namespace to be "Ready" ...
	I0409 00:17:25.533225   52680 pod_ready.go:93] pod "kube-scheduler-test-preload-623381" in "kube-system" namespace has status "Ready":"True"
	I0409 00:17:25.533251   52680 pod_ready.go:82] duration metric: took 401.184782ms for pod "kube-scheduler-test-preload-623381" in "kube-system" namespace to be "Ready" ...
	I0409 00:17:25.533260   52680 pod_ready.go:39] duration metric: took 2.001273228s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0409 00:17:25.533275   52680 api_server.go:52] waiting for apiserver process to appear ...
	I0409 00:17:25.533319   52680 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0409 00:17:25.549140   52680 api_server.go:72] duration metric: took 6.228273394s to wait for apiserver process to appear ...
	I0409 00:17:25.549161   52680 api_server.go:88] waiting for apiserver healthz status ...
	I0409 00:17:25.549175   52680 api_server.go:253] Checking apiserver healthz at https://192.168.39.104:8443/healthz ...
	I0409 00:17:25.553945   52680 api_server.go:279] https://192.168.39.104:8443/healthz returned 200:
	ok
	I0409 00:17:25.554759   52680 api_server.go:141] control plane version: v1.24.4
	I0409 00:17:25.554774   52680 api_server.go:131] duration metric: took 5.608075ms to wait for apiserver health ...
	I0409 00:17:25.554781   52680 system_pods.go:43] waiting for kube-system pods to appear ...
	I0409 00:17:25.735081   52680 system_pods.go:59] 7 kube-system pods found
	I0409 00:17:25.735110   52680 system_pods.go:61] "coredns-6d4b75cb6d-jmxk4" [4128230e-7b53-4e2f-af8c-fef871743abc] Running
	I0409 00:17:25.735117   52680 system_pods.go:61] "etcd-test-preload-623381" [b26eb3c2-6e84-47c9-8d51-4c369e6b56fc] Running
	I0409 00:17:25.735122   52680 system_pods.go:61] "kube-apiserver-test-preload-623381" [26b71332-5451-4a2f-bea2-652ee24fa6c7] Running
	I0409 00:17:25.735128   52680 system_pods.go:61] "kube-controller-manager-test-preload-623381" [97f1ae7a-403a-40ea-a921-0cb84a8c16c6] Running
	I0409 00:17:25.735133   52680 system_pods.go:61] "kube-proxy-sk6cc" [bdcce1de-8a82-43ab-a379-e1ce978dc6a9] Running
	I0409 00:17:25.735137   52680 system_pods.go:61] "kube-scheduler-test-preload-623381" [1318b37a-35d5-479a-9340-e64ab363a529] Running
	I0409 00:17:25.735142   52680 system_pods.go:61] "storage-provisioner" [1cb1112b-5117-49e3-be71-3201c29071de] Running
	I0409 00:17:25.735164   52680 system_pods.go:74] duration metric: took 180.376336ms to wait for pod list to return data ...
	I0409 00:17:25.735174   52680 default_sa.go:34] waiting for default service account to be created ...
	I0409 00:17:25.931923   52680 default_sa.go:45] found service account: "default"
	I0409 00:17:25.931956   52680 default_sa.go:55] duration metric: took 196.774822ms for default service account to be created ...
	I0409 00:17:25.931967   52680 system_pods.go:116] waiting for k8s-apps to be running ...
	I0409 00:17:26.134112   52680 system_pods.go:86] 7 kube-system pods found
	I0409 00:17:26.134140   52680 system_pods.go:89] "coredns-6d4b75cb6d-jmxk4" [4128230e-7b53-4e2f-af8c-fef871743abc] Running
	I0409 00:17:26.134146   52680 system_pods.go:89] "etcd-test-preload-623381" [b26eb3c2-6e84-47c9-8d51-4c369e6b56fc] Running
	I0409 00:17:26.134149   52680 system_pods.go:89] "kube-apiserver-test-preload-623381" [26b71332-5451-4a2f-bea2-652ee24fa6c7] Running
	I0409 00:17:26.134153   52680 system_pods.go:89] "kube-controller-manager-test-preload-623381" [97f1ae7a-403a-40ea-a921-0cb84a8c16c6] Running
	I0409 00:17:26.134156   52680 system_pods.go:89] "kube-proxy-sk6cc" [bdcce1de-8a82-43ab-a379-e1ce978dc6a9] Running
	I0409 00:17:26.134168   52680 system_pods.go:89] "kube-scheduler-test-preload-623381" [1318b37a-35d5-479a-9340-e64ab363a529] Running
	I0409 00:17:26.134175   52680 system_pods.go:89] "storage-provisioner" [1cb1112b-5117-49e3-be71-3201c29071de] Running
	I0409 00:17:26.134182   52680 system_pods.go:126] duration metric: took 202.207921ms to wait for k8s-apps to be running ...
	I0409 00:17:26.134188   52680 system_svc.go:44] waiting for kubelet service to be running ....
	I0409 00:17:26.134229   52680 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0409 00:17:26.147584   52680 system_svc.go:56] duration metric: took 13.389023ms WaitForService to wait for kubelet
	I0409 00:17:26.147607   52680 kubeadm.go:582] duration metric: took 6.826743484s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0409 00:17:26.147644   52680 node_conditions.go:102] verifying NodePressure condition ...
	I0409 00:17:26.334952   52680 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0409 00:17:26.334977   52680 node_conditions.go:123] node cpu capacity is 2
	I0409 00:17:26.334987   52680 node_conditions.go:105] duration metric: took 187.336703ms to run NodePressure ...
	I0409 00:17:26.334998   52680 start.go:241] waiting for startup goroutines ...
	I0409 00:17:26.335004   52680 start.go:246] waiting for cluster config update ...
	I0409 00:17:26.335015   52680 start.go:255] writing updated cluster config ...
	I0409 00:17:26.335294   52680 ssh_runner.go:195] Run: rm -f paused
	I0409 00:17:26.382239   52680 start.go:600] kubectl: 1.32.3, cluster: 1.24.4 (minor skew: 8)
	I0409 00:17:26.384246   52680 out.go:201] 
	W0409 00:17:26.385444   52680 out.go:270] ! /usr/local/bin/kubectl is version 1.32.3, which may have incompatibilities with Kubernetes 1.24.4.
	I0409 00:17:26.386549   52680 out.go:177]   - Want kubectl v1.24.4? Try 'minikube kubectl -- get pods -A'
	I0409 00:17:26.387715   52680 out.go:177] * Done! kubectl is now configured to use "test-preload-623381" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Apr 09 00:17:27 test-preload-623381 crio[673]: time="2025-04-09 00:17:27.260319248Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744157847260297802,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6844b3d0-4d22-4a5b-8d98-46773a9760c5 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 09 00:17:27 test-preload-623381 crio[673]: time="2025-04-09 00:17:27.260872155Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1380a1ce-1262-488f-896f-cd10d60a1f22 name=/runtime.v1.RuntimeService/ListContainers
	Apr 09 00:17:27 test-preload-623381 crio[673]: time="2025-04-09 00:17:27.260980438Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1380a1ce-1262-488f-896f-cd10d60a1f22 name=/runtime.v1.RuntimeService/ListContainers
	Apr 09 00:17:27 test-preload-623381 crio[673]: time="2025-04-09 00:17:27.261183003Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6ff94771a202c0fac7abdcbddbad8952e27f569e1c4cc60fa2e2ac6b1864223d,PodSandboxId:862ad0d7659388c1decfc0de4d827a8e180e75fb28e0f0056e136deca5511f64,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1744157841325375349,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-jmxk4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4128230e-7b53-4e2f-af8c-fef871743abc,},Annotations:map[string]string{io.kubernetes.container.hash: d5db9a1f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:735f9646e501bcbf32db9a88d9889af067ca9a863297e85ef0852279e866e074,PodSandboxId:69da9fe989feb262014dbed062c99c01ddb6f0f49330275c879b8da6a6151803,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1744157834239012709,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 1cb1112b-5117-49e3-be71-3201c29071de,},Annotations:map[string]string{io.kubernetes.container.hash: 70b7ef2d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e1a3acbd8088dc7c38380aa5a7b56ca25de0ac735007a31fa196144511f1039,PodSandboxId:51d470335727ec034a20461f0a063a3c3dfc30a63b9d8cc489e700798885a321,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1744157833948360632,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sk6cc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd
cce1de-8a82-43ab-a379-e1ce978dc6a9,},Annotations:map[string]string{io.kubernetes.container.hash: f85596ce,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1323c79f110d247a8e39500b239d1add627d90ca4edcb90a49671021ba8d4f3,PodSandboxId:d05ac3dcdd3fb450b8a5aaf2de730ab9b4dbb8ca795ff4f0e09d873c804b3b5c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1744157828964035109,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-623381,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 605077489
eff7e8da7002295847181bf,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d143fa8864ce14d86a1ef9f8e03ca8225070eba6f23e09d2b150e7779f157ed7,PodSandboxId:613c7120ce8827aecdae334ab66d0b9a5028ffaec73af8d28d3415b8cc7d43ca,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1744157828926322872,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-623381,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 7119ebef5d04951c82ea0723930c5a79,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:911f92f7691a350d3c7ab8ed3b48bcfc1f97cbe5fb1b3b3d7b9261c98125a873,PodSandboxId:c6722205c3aeea69868d7ee6b602fc0e86b65fad23cee02773ccfedaddcbb1b7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1744157828909949666,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-623381,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5136
e640834f4fc56eb5c3be3996efda,},Annotations:map[string]string{io.kubernetes.container.hash: da9eb463,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:944e9531d0b2b5f5056074d48d5f71d72ee6e108f0fafaaad49ad7c21f5a5ebd,PodSandboxId:6bf06cd8c4c03675e7195228c9994d90144a73083806515a9fef572a425ba500,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1744157828893033322,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-623381,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b21890771f4909f60b5ad0427076b90f,},Annotation
s:map[string]string{io.kubernetes.container.hash: 78d9aee0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1380a1ce-1262-488f-896f-cd10d60a1f22 name=/runtime.v1.RuntimeService/ListContainers
	Apr 09 00:17:27 test-preload-623381 crio[673]: time="2025-04-09 00:17:27.296117394Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c18215d8-03df-4ecc-8592-5abedaebd392 name=/runtime.v1.RuntimeService/Version
	Apr 09 00:17:27 test-preload-623381 crio[673]: time="2025-04-09 00:17:27.296204151Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c18215d8-03df-4ecc-8592-5abedaebd392 name=/runtime.v1.RuntimeService/Version
	Apr 09 00:17:27 test-preload-623381 crio[673]: time="2025-04-09 00:17:27.297237615Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d98f4996-c91a-4866-a538-79a0b1c201a9 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 09 00:17:27 test-preload-623381 crio[673]: time="2025-04-09 00:17:27.297763800Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744157847297739950,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d98f4996-c91a-4866-a538-79a0b1c201a9 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 09 00:17:27 test-preload-623381 crio[673]: time="2025-04-09 00:17:27.298299957Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=94c9bb56-0866-456b-87b8-cb15fed71af6 name=/runtime.v1.RuntimeService/ListContainers
	Apr 09 00:17:27 test-preload-623381 crio[673]: time="2025-04-09 00:17:27.298361413Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=94c9bb56-0866-456b-87b8-cb15fed71af6 name=/runtime.v1.RuntimeService/ListContainers
	Apr 09 00:17:27 test-preload-623381 crio[673]: time="2025-04-09 00:17:27.298532066Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6ff94771a202c0fac7abdcbddbad8952e27f569e1c4cc60fa2e2ac6b1864223d,PodSandboxId:862ad0d7659388c1decfc0de4d827a8e180e75fb28e0f0056e136deca5511f64,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1744157841325375349,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-jmxk4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4128230e-7b53-4e2f-af8c-fef871743abc,},Annotations:map[string]string{io.kubernetes.container.hash: d5db9a1f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:735f9646e501bcbf32db9a88d9889af067ca9a863297e85ef0852279e866e074,PodSandboxId:69da9fe989feb262014dbed062c99c01ddb6f0f49330275c879b8da6a6151803,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1744157834239012709,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 1cb1112b-5117-49e3-be71-3201c29071de,},Annotations:map[string]string{io.kubernetes.container.hash: 70b7ef2d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e1a3acbd8088dc7c38380aa5a7b56ca25de0ac735007a31fa196144511f1039,PodSandboxId:51d470335727ec034a20461f0a063a3c3dfc30a63b9d8cc489e700798885a321,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1744157833948360632,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sk6cc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd
cce1de-8a82-43ab-a379-e1ce978dc6a9,},Annotations:map[string]string{io.kubernetes.container.hash: f85596ce,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1323c79f110d247a8e39500b239d1add627d90ca4edcb90a49671021ba8d4f3,PodSandboxId:d05ac3dcdd3fb450b8a5aaf2de730ab9b4dbb8ca795ff4f0e09d873c804b3b5c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1744157828964035109,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-623381,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 605077489
eff7e8da7002295847181bf,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d143fa8864ce14d86a1ef9f8e03ca8225070eba6f23e09d2b150e7779f157ed7,PodSandboxId:613c7120ce8827aecdae334ab66d0b9a5028ffaec73af8d28d3415b8cc7d43ca,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1744157828926322872,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-623381,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 7119ebef5d04951c82ea0723930c5a79,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:911f92f7691a350d3c7ab8ed3b48bcfc1f97cbe5fb1b3b3d7b9261c98125a873,PodSandboxId:c6722205c3aeea69868d7ee6b602fc0e86b65fad23cee02773ccfedaddcbb1b7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1744157828909949666,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-623381,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5136
e640834f4fc56eb5c3be3996efda,},Annotations:map[string]string{io.kubernetes.container.hash: da9eb463,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:944e9531d0b2b5f5056074d48d5f71d72ee6e108f0fafaaad49ad7c21f5a5ebd,PodSandboxId:6bf06cd8c4c03675e7195228c9994d90144a73083806515a9fef572a425ba500,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1744157828893033322,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-623381,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b21890771f4909f60b5ad0427076b90f,},Annotation
s:map[string]string{io.kubernetes.container.hash: 78d9aee0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=94c9bb56-0866-456b-87b8-cb15fed71af6 name=/runtime.v1.RuntimeService/ListContainers
	Apr 09 00:17:27 test-preload-623381 crio[673]: time="2025-04-09 00:17:27.334339496Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2444aa15-a616-4e5b-aae6-5a9957bb5bb8 name=/runtime.v1.RuntimeService/Version
	Apr 09 00:17:27 test-preload-623381 crio[673]: time="2025-04-09 00:17:27.334411884Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2444aa15-a616-4e5b-aae6-5a9957bb5bb8 name=/runtime.v1.RuntimeService/Version
	Apr 09 00:17:27 test-preload-623381 crio[673]: time="2025-04-09 00:17:27.335390963Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=082137fb-7a5f-48a4-9c03-205e1acd0798 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 09 00:17:27 test-preload-623381 crio[673]: time="2025-04-09 00:17:27.335878226Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744157847335855959,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=082137fb-7a5f-48a4-9c03-205e1acd0798 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 09 00:17:27 test-preload-623381 crio[673]: time="2025-04-09 00:17:27.336353815Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f886a696-1c15-4c72-bb09-7e749b45fd30 name=/runtime.v1.RuntimeService/ListContainers
	Apr 09 00:17:27 test-preload-623381 crio[673]: time="2025-04-09 00:17:27.336420032Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f886a696-1c15-4c72-bb09-7e749b45fd30 name=/runtime.v1.RuntimeService/ListContainers
	Apr 09 00:17:27 test-preload-623381 crio[673]: time="2025-04-09 00:17:27.336722816Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6ff94771a202c0fac7abdcbddbad8952e27f569e1c4cc60fa2e2ac6b1864223d,PodSandboxId:862ad0d7659388c1decfc0de4d827a8e180e75fb28e0f0056e136deca5511f64,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1744157841325375349,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-jmxk4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4128230e-7b53-4e2f-af8c-fef871743abc,},Annotations:map[string]string{io.kubernetes.container.hash: d5db9a1f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:735f9646e501bcbf32db9a88d9889af067ca9a863297e85ef0852279e866e074,PodSandboxId:69da9fe989feb262014dbed062c99c01ddb6f0f49330275c879b8da6a6151803,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1744157834239012709,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 1cb1112b-5117-49e3-be71-3201c29071de,},Annotations:map[string]string{io.kubernetes.container.hash: 70b7ef2d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e1a3acbd8088dc7c38380aa5a7b56ca25de0ac735007a31fa196144511f1039,PodSandboxId:51d470335727ec034a20461f0a063a3c3dfc30a63b9d8cc489e700798885a321,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1744157833948360632,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sk6cc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd
cce1de-8a82-43ab-a379-e1ce978dc6a9,},Annotations:map[string]string{io.kubernetes.container.hash: f85596ce,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1323c79f110d247a8e39500b239d1add627d90ca4edcb90a49671021ba8d4f3,PodSandboxId:d05ac3dcdd3fb450b8a5aaf2de730ab9b4dbb8ca795ff4f0e09d873c804b3b5c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1744157828964035109,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-623381,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 605077489
eff7e8da7002295847181bf,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d143fa8864ce14d86a1ef9f8e03ca8225070eba6f23e09d2b150e7779f157ed7,PodSandboxId:613c7120ce8827aecdae334ab66d0b9a5028ffaec73af8d28d3415b8cc7d43ca,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1744157828926322872,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-623381,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 7119ebef5d04951c82ea0723930c5a79,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:911f92f7691a350d3c7ab8ed3b48bcfc1f97cbe5fb1b3b3d7b9261c98125a873,PodSandboxId:c6722205c3aeea69868d7ee6b602fc0e86b65fad23cee02773ccfedaddcbb1b7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1744157828909949666,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-623381,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5136
e640834f4fc56eb5c3be3996efda,},Annotations:map[string]string{io.kubernetes.container.hash: da9eb463,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:944e9531d0b2b5f5056074d48d5f71d72ee6e108f0fafaaad49ad7c21f5a5ebd,PodSandboxId:6bf06cd8c4c03675e7195228c9994d90144a73083806515a9fef572a425ba500,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1744157828893033322,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-623381,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b21890771f4909f60b5ad0427076b90f,},Annotation
s:map[string]string{io.kubernetes.container.hash: 78d9aee0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f886a696-1c15-4c72-bb09-7e749b45fd30 name=/runtime.v1.RuntimeService/ListContainers
	Apr 09 00:17:27 test-preload-623381 crio[673]: time="2025-04-09 00:17:27.367693140Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9299ca3a-779a-498e-ae89-46da513b95c6 name=/runtime.v1.RuntimeService/Version
	Apr 09 00:17:27 test-preload-623381 crio[673]: time="2025-04-09 00:17:27.367768102Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9299ca3a-779a-498e-ae89-46da513b95c6 name=/runtime.v1.RuntimeService/Version
	Apr 09 00:17:27 test-preload-623381 crio[673]: time="2025-04-09 00:17:27.369263006Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e2c1240f-729d-4600-b252-906ac31956e7 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 09 00:17:27 test-preload-623381 crio[673]: time="2025-04-09 00:17:27.369884902Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744157847369859850,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e2c1240f-729d-4600-b252-906ac31956e7 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 09 00:17:27 test-preload-623381 crio[673]: time="2025-04-09 00:17:27.370657222Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a7479036-8d53-4342-b79e-624dd7a4e159 name=/runtime.v1.RuntimeService/ListContainers
	Apr 09 00:17:27 test-preload-623381 crio[673]: time="2025-04-09 00:17:27.370760986Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a7479036-8d53-4342-b79e-624dd7a4e159 name=/runtime.v1.RuntimeService/ListContainers
	Apr 09 00:17:27 test-preload-623381 crio[673]: time="2025-04-09 00:17:27.370937049Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6ff94771a202c0fac7abdcbddbad8952e27f569e1c4cc60fa2e2ac6b1864223d,PodSandboxId:862ad0d7659388c1decfc0de4d827a8e180e75fb28e0f0056e136deca5511f64,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1744157841325375349,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-jmxk4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4128230e-7b53-4e2f-af8c-fef871743abc,},Annotations:map[string]string{io.kubernetes.container.hash: d5db9a1f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:735f9646e501bcbf32db9a88d9889af067ca9a863297e85ef0852279e866e074,PodSandboxId:69da9fe989feb262014dbed062c99c01ddb6f0f49330275c879b8da6a6151803,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1744157834239012709,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 1cb1112b-5117-49e3-be71-3201c29071de,},Annotations:map[string]string{io.kubernetes.container.hash: 70b7ef2d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e1a3acbd8088dc7c38380aa5a7b56ca25de0ac735007a31fa196144511f1039,PodSandboxId:51d470335727ec034a20461f0a063a3c3dfc30a63b9d8cc489e700798885a321,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1744157833948360632,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sk6cc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd
cce1de-8a82-43ab-a379-e1ce978dc6a9,},Annotations:map[string]string{io.kubernetes.container.hash: f85596ce,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1323c79f110d247a8e39500b239d1add627d90ca4edcb90a49671021ba8d4f3,PodSandboxId:d05ac3dcdd3fb450b8a5aaf2de730ab9b4dbb8ca795ff4f0e09d873c804b3b5c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1744157828964035109,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-623381,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 605077489
eff7e8da7002295847181bf,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d143fa8864ce14d86a1ef9f8e03ca8225070eba6f23e09d2b150e7779f157ed7,PodSandboxId:613c7120ce8827aecdae334ab66d0b9a5028ffaec73af8d28d3415b8cc7d43ca,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1744157828926322872,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-623381,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 7119ebef5d04951c82ea0723930c5a79,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:911f92f7691a350d3c7ab8ed3b48bcfc1f97cbe5fb1b3b3d7b9261c98125a873,PodSandboxId:c6722205c3aeea69868d7ee6b602fc0e86b65fad23cee02773ccfedaddcbb1b7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1744157828909949666,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-623381,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5136
e640834f4fc56eb5c3be3996efda,},Annotations:map[string]string{io.kubernetes.container.hash: da9eb463,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:944e9531d0b2b5f5056074d48d5f71d72ee6e108f0fafaaad49ad7c21f5a5ebd,PodSandboxId:6bf06cd8c4c03675e7195228c9994d90144a73083806515a9fef572a425ba500,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1744157828893033322,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-623381,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b21890771f4909f60b5ad0427076b90f,},Annotation
s:map[string]string{io.kubernetes.container.hash: 78d9aee0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a7479036-8d53-4342-b79e-624dd7a4e159 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	6ff94771a202c       a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03   6 seconds ago       Running             coredns                   1                   862ad0d765938       coredns-6d4b75cb6d-jmxk4
	735f9646e501b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   13 seconds ago      Running             storage-provisioner       1                   69da9fe989feb       storage-provisioner
	1e1a3acbd8088       7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7   13 seconds ago      Running             kube-proxy                1                   51d470335727e       kube-proxy-sk6cc
	a1323c79f110d       03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9   18 seconds ago      Running             kube-scheduler            1                   d05ac3dcdd3fb       kube-scheduler-test-preload-623381
	d143fa8864ce1       1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48   18 seconds ago      Running             kube-controller-manager   1                   613c7120ce882       kube-controller-manager-test-preload-623381
	911f92f7691a3       6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d   18 seconds ago      Running             kube-apiserver            1                   c6722205c3aee       kube-apiserver-test-preload-623381
	944e9531d0b2b       aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b   18 seconds ago      Running             etcd                      1                   6bf06cd8c4c03       etcd-test-preload-623381
	
	
	==> coredns [6ff94771a202c0fac7abdcbddbad8952e27f569e1c4cc60fa2e2ac6b1864223d] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = bbeeddb09682f41960fef01b05cb3a3d
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] 127.0.0.1:49130 - 21935 "HINFO IN 641010631929575762.5857184792023457001. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.013221673s
	
	
	==> describe nodes <==
	Name:               test-preload-623381
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-623381
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fd2f4c3eba2bd452b5997c855e28d0966165ba83
	                    minikube.k8s.io/name=test-preload-623381
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_04_09T00_15_51_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 09 Apr 2025 00:15:48 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-623381
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 09 Apr 2025 00:17:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 09 Apr 2025 00:17:23 +0000   Wed, 09 Apr 2025 00:15:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 09 Apr 2025 00:17:23 +0000   Wed, 09 Apr 2025 00:15:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 09 Apr 2025 00:17:23 +0000   Wed, 09 Apr 2025 00:15:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 09 Apr 2025 00:17:23 +0000   Wed, 09 Apr 2025 00:17:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.104
	  Hostname:    test-preload-623381
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 a407242f63064e0ca9cd8bdb42c7cda9
	  System UUID:                a407242f-6306-4e0c-a9cd-8bdb42c7cda9
	  Boot ID:                    d0f9916e-df33-47a1-b471-825d61ac5909
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.24.4
	  Kube-Proxy Version:         v1.24.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-jmxk4                       100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     83s
	  kube-system                 etcd-test-preload-623381                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         96s
	  kube-system                 kube-apiserver-test-preload-623381             250m (12%)    0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 kube-controller-manager-test-preload-623381    200m (10%)    0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 kube-proxy-sk6cc                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         84s
	  kube-system                 kube-scheduler-test-preload-623381             100m (5%)     0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         82s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 13s                  kube-proxy       
	  Normal  Starting                 82s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  104s (x5 over 104s)  kubelet          Node test-preload-623381 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    104s (x4 over 104s)  kubelet          Node test-preload-623381 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     104s (x4 over 104s)  kubelet          Node test-preload-623381 status is now: NodeHasSufficientPID
	  Normal  Starting                 96s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  96s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  96s                  kubelet          Node test-preload-623381 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    96s                  kubelet          Node test-preload-623381 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     96s                  kubelet          Node test-preload-623381 status is now: NodeHasSufficientPID
	  Normal  NodeReady                86s                  kubelet          Node test-preload-623381 status is now: NodeReady
	  Normal  RegisteredNode           84s                  node-controller  Node test-preload-623381 event: Registered Node test-preload-623381 in Controller
	  Normal  Starting                 19s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  19s (x8 over 19s)    kubelet          Node test-preload-623381 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19s (x8 over 19s)    kubelet          Node test-preload-623381 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19s (x7 over 19s)    kubelet          Node test-preload-623381 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  19s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2s                   node-controller  Node test-preload-623381 event: Registered Node test-preload-623381 in Controller
	
	
	==> dmesg <==
	[Apr 9 00:16] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051257] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.037644] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.825205] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.949867] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.542946] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.651135] systemd-fstab-generator[594]: Ignoring "noauto" option for root device
	[  +0.058864] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.059666] systemd-fstab-generator[606]: Ignoring "noauto" option for root device
	[  +0.148908] systemd-fstab-generator[620]: Ignoring "noauto" option for root device
	[  +0.146017] systemd-fstab-generator[632]: Ignoring "noauto" option for root device
	[  +0.269574] systemd-fstab-generator[664]: Ignoring "noauto" option for root device
	[Apr 9 00:17] systemd-fstab-generator[997]: Ignoring "noauto" option for root device
	[  +0.056299] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.846379] systemd-fstab-generator[1125]: Ignoring "noauto" option for root device
	[  +5.355428] kauditd_printk_skb: 105 callbacks suppressed
	[  +6.034068] systemd-fstab-generator[1774]: Ignoring "noauto" option for root device
	[  +0.098052] kauditd_printk_skb: 31 callbacks suppressed
	[  +6.233727] kauditd_printk_skb: 33 callbacks suppressed
	
	
	==> etcd [944e9531d0b2b5f5056074d48d5f71d72ee6e108f0fafaaad49ad7c21f5a5ebd] <==
	{"level":"info","ts":"2025-04-09T00:17:09.165Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"223628dc6b2f68bd","local-server-version":"3.5.3","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2025-04-09T00:17:09.172Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2025-04-09T00:17:09.173Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"223628dc6b2f68bd switched to configuration voters=(2465202773188110525)"}
	{"level":"info","ts":"2025-04-09T00:17:09.173Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"bcba49d8b8764a98","local-member-id":"223628dc6b2f68bd","added-peer-id":"223628dc6b2f68bd","added-peer-peer-urls":["https://192.168.39.104:2380"]}
	{"level":"info","ts":"2025-04-09T00:17:09.173Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"bcba49d8b8764a98","local-member-id":"223628dc6b2f68bd","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-09T00:17:09.173Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-09T00:17:09.181Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-04-09T00:17:09.183Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"223628dc6b2f68bd","initial-advertise-peer-urls":["https://192.168.39.104:2380"],"listen-peer-urls":["https://192.168.39.104:2380"],"advertise-client-urls":["https://192.168.39.104:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.104:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-04-09T00:17:09.182Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.39.104:2380"}
	{"level":"info","ts":"2025-04-09T00:17:09.185Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.39.104:2380"}
	{"level":"info","ts":"2025-04-09T00:17:09.183Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-04-09T00:17:10.720Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"223628dc6b2f68bd is starting a new election at term 2"}
	{"level":"info","ts":"2025-04-09T00:17:10.720Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"223628dc6b2f68bd became pre-candidate at term 2"}
	{"level":"info","ts":"2025-04-09T00:17:10.720Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"223628dc6b2f68bd received MsgPreVoteResp from 223628dc6b2f68bd at term 2"}
	{"level":"info","ts":"2025-04-09T00:17:10.720Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"223628dc6b2f68bd became candidate at term 3"}
	{"level":"info","ts":"2025-04-09T00:17:10.720Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"223628dc6b2f68bd received MsgVoteResp from 223628dc6b2f68bd at term 3"}
	{"level":"info","ts":"2025-04-09T00:17:10.720Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"223628dc6b2f68bd became leader at term 3"}
	{"level":"info","ts":"2025-04-09T00:17:10.720Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 223628dc6b2f68bd elected leader 223628dc6b2f68bd at term 3"}
	{"level":"info","ts":"2025-04-09T00:17:10.720Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"223628dc6b2f68bd","local-member-attributes":"{Name:test-preload-623381 ClientURLs:[https://192.168.39.104:2379]}","request-path":"/0/members/223628dc6b2f68bd/attributes","cluster-id":"bcba49d8b8764a98","publish-timeout":"7s"}
	{"level":"info","ts":"2025-04-09T00:17:10.720Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-04-09T00:17:10.722Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.39.104:2379"}
	{"level":"info","ts":"2025-04-09T00:17:10.722Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-04-09T00:17:10.723Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-04-09T00:17:10.723Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-04-09T00:17:10.724Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 00:17:27 up 0 min,  0 users,  load average: 0.88, 0.22, 0.07
	Linux test-preload-623381 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [911f92f7691a350d3c7ab8ed3b48bcfc1f97cbe5fb1b3b3d7b9261c98125a873] <==
	I0409 00:17:13.014223       1 controller.go:85] Starting OpenAPI V3 controller
	I0409 00:17:13.014251       1 naming_controller.go:291] Starting NamingConditionController
	I0409 00:17:13.014269       1 establishing_controller.go:76] Starting EstablishingController
	I0409 00:17:13.014494       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0409 00:17:13.014506       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0409 00:17:13.014521       1 crd_finalizer.go:266] Starting CRDFinalizer
	E0409 00:17:13.103330       1 controller.go:169] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	I0409 00:17:13.114984       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0409 00:17:13.128269       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0409 00:17:13.130907       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0409 00:17:13.188055       1 cache.go:39] Caches are synced for autoregister controller
	I0409 00:17:13.189142       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0409 00:17:13.194266       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0409 00:17:13.194365       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0409 00:17:13.197120       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0409 00:17:13.671932       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0409 00:17:13.993268       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0409 00:17:14.394418       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0409 00:17:14.474397       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0409 00:17:14.501555       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0409 00:17:14.532451       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0409 00:17:14.546625       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0409 00:17:14.552709       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0409 00:17:25.695006       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0409 00:17:25.767453       1 controller.go:611] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [d143fa8864ce14d86a1ef9f8e03ca8225070eba6f23e09d2b150e7779f157ed7] <==
	I0409 00:17:25.739298       1 shared_informer.go:262] Caches are synced for disruption
	I0409 00:17:25.739326       1 disruption.go:371] Sending events to api server.
	I0409 00:17:25.740492       1 shared_informer.go:262] Caches are synced for TTL
	I0409 00:17:25.741727       1 shared_informer.go:262] Caches are synced for stateful set
	I0409 00:17:25.742891       1 shared_informer.go:262] Caches are synced for job
	I0409 00:17:25.745691       1 shared_informer.go:262] Caches are synced for GC
	I0409 00:17:25.749991       1 shared_informer.go:262] Caches are synced for taint
	I0409 00:17:25.750268       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: 
	W0409 00:17:25.750462       1 node_lifecycle_controller.go:1014] Missing timestamp for Node test-preload-623381. Assuming now as a timestamp.
	I0409 00:17:25.750539       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I0409 00:17:25.750973       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0409 00:17:25.753068       1 event.go:294] "Event occurred" object="test-preload-623381" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node test-preload-623381 event: Registered Node test-preload-623381 in Controller"
	I0409 00:17:25.754463       1 shared_informer.go:262] Caches are synced for endpoint
	I0409 00:17:25.782737       1 shared_informer.go:262] Caches are synced for ClusterRoleAggregator
	I0409 00:17:25.787894       1 shared_informer.go:262] Caches are synced for daemon sets
	I0409 00:17:25.892422       1 shared_informer.go:262] Caches are synced for certificate-csrapproving
	I0409 00:17:25.917760       1 shared_informer.go:262] Caches are synced for resource quota
	I0409 00:17:25.932059       1 shared_informer.go:262] Caches are synced for resource quota
	I0409 00:17:25.945844       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-serving
	I0409 00:17:25.947005       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-client
	I0409 00:17:25.947069       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-legacy-unknown
	I0409 00:17:25.947116       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0409 00:17:26.383131       1 shared_informer.go:262] Caches are synced for garbage collector
	I0409 00:17:26.392681       1 shared_informer.go:262] Caches are synced for garbage collector
	I0409 00:17:26.392707       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	
	
	==> kube-proxy [1e1a3acbd8088dc7c38380aa5a7b56ca25de0ac735007a31fa196144511f1039] <==
	I0409 00:17:14.282660       1 node.go:163] Successfully retrieved node IP: 192.168.39.104
	I0409 00:17:14.283066       1 server_others.go:138] "Detected node IP" address="192.168.39.104"
	I0409 00:17:14.283277       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0409 00:17:14.368922       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0409 00:17:14.368940       1 server_others.go:206] "Using iptables Proxier"
	I0409 00:17:14.369359       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0409 00:17:14.369975       1 server.go:661] "Version info" version="v1.24.4"
	I0409 00:17:14.370193       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0409 00:17:14.372081       1 config.go:317] "Starting service config controller"
	I0409 00:17:14.372180       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0409 00:17:14.372275       1 config.go:226] "Starting endpoint slice config controller"
	I0409 00:17:14.372298       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0409 00:17:14.382705       1 config.go:444] "Starting node config controller"
	I0409 00:17:14.382720       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0409 00:17:14.472904       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0409 00:17:14.473024       1 shared_informer.go:262] Caches are synced for service config
	I0409 00:17:14.483235       1 shared_informer.go:262] Caches are synced for node config
	
	
	==> kube-scheduler [a1323c79f110d247a8e39500b239d1add627d90ca4edcb90a49671021ba8d4f3] <==
	I0409 00:17:09.545381       1 serving.go:348] Generated self-signed cert in-memory
	W0409 00:17:13.032012       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0409 00:17:13.032143       1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0409 00:17:13.032215       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0409 00:17:13.032242       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0409 00:17:13.097913       1 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.4"
	I0409 00:17:13.097998       1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0409 00:17:13.104491       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0409 00:17:13.104857       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0409 00:17:13.105064       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0409 00:17:13.107207       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0409 00:17:13.205784       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 09 00:17:13 test-preload-623381 kubelet[1132]: I0409 00:17:13.250164    1132 topology_manager.go:200] "Topology Admit Handler"
	Apr 09 00:17:13 test-preload-623381 kubelet[1132]: I0409 00:17:13.250625    1132 topology_manager.go:200] "Topology Admit Handler"
	Apr 09 00:17:13 test-preload-623381 kubelet[1132]: I0409 00:17:13.250705    1132 topology_manager.go:200] "Topology Admit Handler"
	Apr 09 00:17:13 test-preload-623381 kubelet[1132]: E0409 00:17:13.253414    1132 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-jmxk4" podUID=4128230e-7b53-4e2f-af8c-fef871743abc
	Apr 09 00:17:13 test-preload-623381 kubelet[1132]: E0409 00:17:13.282492    1132 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Apr 09 00:17:13 test-preload-623381 kubelet[1132]: I0409 00:17:13.288690    1132 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ftzq7\" (UniqueName: \"kubernetes.io/projected/1cb1112b-5117-49e3-be71-3201c29071de-kube-api-access-ftzq7\") pod \"storage-provisioner\" (UID: \"1cb1112b-5117-49e3-be71-3201c29071de\") " pod="kube-system/storage-provisioner"
	Apr 09 00:17:13 test-preload-623381 kubelet[1132]: I0409 00:17:13.288800    1132 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bdcce1de-8a82-43ab-a379-e1ce978dc6a9-lib-modules\") pod \"kube-proxy-sk6cc\" (UID: \"bdcce1de-8a82-43ab-a379-e1ce978dc6a9\") " pod="kube-system/kube-proxy-sk6cc"
	Apr 09 00:17:13 test-preload-623381 kubelet[1132]: I0409 00:17:13.288941    1132 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bdcce1de-8a82-43ab-a379-e1ce978dc6a9-xtables-lock\") pod \"kube-proxy-sk6cc\" (UID: \"bdcce1de-8a82-43ab-a379-e1ce978dc6a9\") " pod="kube-system/kube-proxy-sk6cc"
	Apr 09 00:17:13 test-preload-623381 kubelet[1132]: I0409 00:17:13.288981    1132 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4128230e-7b53-4e2f-af8c-fef871743abc-config-volume\") pod \"coredns-6d4b75cb6d-jmxk4\" (UID: \"4128230e-7b53-4e2f-af8c-fef871743abc\") " pod="kube-system/coredns-6d4b75cb6d-jmxk4"
	Apr 09 00:17:13 test-preload-623381 kubelet[1132]: I0409 00:17:13.289005    1132 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8dt7k\" (UniqueName: \"kubernetes.io/projected/4128230e-7b53-4e2f-af8c-fef871743abc-kube-api-access-8dt7k\") pod \"coredns-6d4b75cb6d-jmxk4\" (UID: \"4128230e-7b53-4e2f-af8c-fef871743abc\") " pod="kube-system/coredns-6d4b75cb6d-jmxk4"
	Apr 09 00:17:13 test-preload-623381 kubelet[1132]: I0409 00:17:13.289026    1132 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/bdcce1de-8a82-43ab-a379-e1ce978dc6a9-kube-proxy\") pod \"kube-proxy-sk6cc\" (UID: \"bdcce1de-8a82-43ab-a379-e1ce978dc6a9\") " pod="kube-system/kube-proxy-sk6cc"
	Apr 09 00:17:13 test-preload-623381 kubelet[1132]: I0409 00:17:13.289045    1132 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfwrn\" (UniqueName: \"kubernetes.io/projected/bdcce1de-8a82-43ab-a379-e1ce978dc6a9-kube-api-access-xfwrn\") pod \"kube-proxy-sk6cc\" (UID: \"bdcce1de-8a82-43ab-a379-e1ce978dc6a9\") " pod="kube-system/kube-proxy-sk6cc"
	Apr 09 00:17:13 test-preload-623381 kubelet[1132]: I0409 00:17:13.289062    1132 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/1cb1112b-5117-49e3-be71-3201c29071de-tmp\") pod \"storage-provisioner\" (UID: \"1cb1112b-5117-49e3-be71-3201c29071de\") " pod="kube-system/storage-provisioner"
	Apr 09 00:17:13 test-preload-623381 kubelet[1132]: I0409 00:17:13.289090    1132 reconciler.go:159] "Reconciler: start to sync state"
	Apr 09 00:17:13 test-preload-623381 kubelet[1132]: E0409 00:17:13.393831    1132 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Apr 09 00:17:13 test-preload-623381 kubelet[1132]: E0409 00:17:13.393997    1132 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/4128230e-7b53-4e2f-af8c-fef871743abc-config-volume podName:4128230e-7b53-4e2f-af8c-fef871743abc nodeName:}" failed. No retries permitted until 2025-04-09 00:17:13.893922075 +0000 UTC m=+5.804059688 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/4128230e-7b53-4e2f-af8c-fef871743abc-config-volume") pod "coredns-6d4b75cb6d-jmxk4" (UID: "4128230e-7b53-4e2f-af8c-fef871743abc") : object "kube-system"/"coredns" not registered
	Apr 09 00:17:13 test-preload-623381 kubelet[1132]: E0409 00:17:13.896773    1132 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Apr 09 00:17:13 test-preload-623381 kubelet[1132]: E0409 00:17:13.896839    1132 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/4128230e-7b53-4e2f-af8c-fef871743abc-config-volume podName:4128230e-7b53-4e2f-af8c-fef871743abc nodeName:}" failed. No retries permitted until 2025-04-09 00:17:14.896824705 +0000 UTC m=+6.806962299 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/4128230e-7b53-4e2f-af8c-fef871743abc-config-volume") pod "coredns-6d4b75cb6d-jmxk4" (UID: "4128230e-7b53-4e2f-af8c-fef871743abc") : object "kube-system"/"coredns" not registered
	Apr 09 00:17:14 test-preload-623381 kubelet[1132]: I0409 00:17:14.327268    1132 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=e7b227e5-dfbe-4051-9f83-2e33d6f1adce path="/var/lib/kubelet/pods/e7b227e5-dfbe-4051-9f83-2e33d6f1adce/volumes"
	Apr 09 00:17:14 test-preload-623381 kubelet[1132]: E0409 00:17:14.905011    1132 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Apr 09 00:17:14 test-preload-623381 kubelet[1132]: E0409 00:17:14.905101    1132 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/4128230e-7b53-4e2f-af8c-fef871743abc-config-volume podName:4128230e-7b53-4e2f-af8c-fef871743abc nodeName:}" failed. No retries permitted until 2025-04-09 00:17:16.905085432 +0000 UTC m=+8.815223027 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/4128230e-7b53-4e2f-af8c-fef871743abc-config-volume") pod "coredns-6d4b75cb6d-jmxk4" (UID: "4128230e-7b53-4e2f-af8c-fef871743abc") : object "kube-system"/"coredns" not registered
	Apr 09 00:17:15 test-preload-623381 kubelet[1132]: E0409 00:17:15.319257    1132 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-jmxk4" podUID=4128230e-7b53-4e2f-af8c-fef871743abc
	Apr 09 00:17:16 test-preload-623381 kubelet[1132]: E0409 00:17:16.919145    1132 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Apr 09 00:17:16 test-preload-623381 kubelet[1132]: E0409 00:17:16.919251    1132 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/4128230e-7b53-4e2f-af8c-fef871743abc-config-volume podName:4128230e-7b53-4e2f-af8c-fef871743abc nodeName:}" failed. No retries permitted until 2025-04-09 00:17:20.919232536 +0000 UTC m=+12.829370143 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/4128230e-7b53-4e2f-af8c-fef871743abc-config-volume") pod "coredns-6d4b75cb6d-jmxk4" (UID: "4128230e-7b53-4e2f-af8c-fef871743abc") : object "kube-system"/"coredns" not registered
	Apr 09 00:17:17 test-preload-623381 kubelet[1132]: E0409 00:17:17.319382    1132 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-jmxk4" podUID=4128230e-7b53-4e2f-af8c-fef871743abc
	
	
	==> storage-provisioner [735f9646e501bcbf32db9a88d9889af067ca9a863297e85ef0852279e866e074] <==
	I0409 00:17:14.423161       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-623381 -n test-preload-623381
helpers_test.go:261: (dbg) Run:  kubectl --context test-preload-623381 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-623381" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-623381
--- FAIL: TestPreload (164.94s)

                                                
                                    
x
+
TestKubernetesUpgrade (397.27s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-636554 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-636554 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (5m10.275646645s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-636554] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20501
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20501-9125/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20501-9125/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-636554" primary control-plane node in "kubernetes-upgrade-636554" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0409 00:23:10.682301   60516 out.go:345] Setting OutFile to fd 1 ...
	I0409 00:23:10.682394   60516 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0409 00:23:10.682404   60516 out.go:358] Setting ErrFile to fd 2...
	I0409 00:23:10.682410   60516 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0409 00:23:10.682578   60516 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20501-9125/.minikube/bin
	I0409 00:23:10.683133   60516 out.go:352] Setting JSON to false
	I0409 00:23:10.684025   60516 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":7536,"bootTime":1744150655,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0409 00:23:10.684079   60516 start.go:139] virtualization: kvm guest
	I0409 00:23:10.686028   60516 out.go:177] * [kubernetes-upgrade-636554] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0409 00:23:10.687190   60516 out.go:177]   - MINIKUBE_LOCATION=20501
	I0409 00:23:10.687182   60516 notify.go:220] Checking for updates...
	I0409 00:23:10.688463   60516 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0409 00:23:10.689559   60516 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20501-9125/kubeconfig
	I0409 00:23:10.690619   60516 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20501-9125/.minikube
	I0409 00:23:10.691641   60516 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0409 00:23:10.692660   60516 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0409 00:23:10.694145   60516 config.go:182] Loaded profile config "NoKubernetes-006125": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I0409 00:23:10.694247   60516 config.go:182] Loaded profile config "cert-expiration-242018": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0409 00:23:10.694323   60516 config.go:182] Loaded profile config "force-systemd-flag-660305": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0409 00:23:10.694433   60516 driver.go:404] Setting default libvirt URI to qemu:///system
	I0409 00:23:10.729981   60516 out.go:177] * Using the kvm2 driver based on user configuration
	I0409 00:23:10.731108   60516 start.go:297] selected driver: kvm2
	I0409 00:23:10.731129   60516 start.go:901] validating driver "kvm2" against <nil>
	I0409 00:23:10.731143   60516 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0409 00:23:10.732130   60516 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0409 00:23:10.732223   60516 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20501-9125/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0409 00:23:10.747359   60516 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0409 00:23:10.747403   60516 start_flags.go:311] no existing cluster config was found, will generate one from the flags 
	I0409 00:23:10.747704   60516 start_flags.go:957] Wait components to verify : map[apiserver:true system_pods:true]
	I0409 00:23:10.747742   60516 cni.go:84] Creating CNI manager for ""
	I0409 00:23:10.747796   60516 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0409 00:23:10.747806   60516 start_flags.go:320] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0409 00:23:10.747860   60516 start.go:340] cluster config:
	{Name:kubernetes-upgrade-636554 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-636554 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0409 00:23:10.747992   60516 iso.go:125] acquiring lock: {Name:mk618477bad490b102618c53c9c8c6b34f33ce81 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0409 00:23:10.749618   60516 out.go:177] * Starting "kubernetes-upgrade-636554" primary control-plane node in "kubernetes-upgrade-636554" cluster
	I0409 00:23:10.750629   60516 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0409 00:23:10.750675   60516 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20501-9125/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0409 00:23:10.750686   60516 cache.go:56] Caching tarball of preloaded images
	I0409 00:23:10.750760   60516 preload.go:172] Found /home/jenkins/minikube-integration/20501-9125/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0409 00:23:10.750775   60516 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0409 00:23:10.750885   60516 profile.go:143] Saving config to /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/kubernetes-upgrade-636554/config.json ...
	I0409 00:23:10.750961   60516 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/kubernetes-upgrade-636554/config.json: {Name:mkebb117a3feedd4f5dc72e3e74f21229ec5b676 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0409 00:23:10.751149   60516 start.go:360] acquireMachinesLock for kubernetes-upgrade-636554: {Name:mke7be7b51cfddf557a39ecf6493fff6a1168ec9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0409 00:23:51.560720   60516 start.go:364] duration metric: took 40.809529433s to acquireMachinesLock for "kubernetes-upgrade-636554"
	I0409 00:23:51.560801   60516 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-636554 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-636554 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0409 00:23:51.560907   60516 start.go:125] createHost starting for "" (driver="kvm2")
	I0409 00:23:51.562500   60516 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0409 00:23:51.562695   60516 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0409 00:23:51.562758   60516 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0409 00:23:51.583361   60516 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46771
	I0409 00:23:51.583924   60516 main.go:141] libmachine: () Calling .GetVersion
	I0409 00:23:51.584507   60516 main.go:141] libmachine: Using API Version  1
	I0409 00:23:51.584543   60516 main.go:141] libmachine: () Calling .SetConfigRaw
	I0409 00:23:51.584945   60516 main.go:141] libmachine: () Calling .GetMachineName
	I0409 00:23:51.585195   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) Calling .GetMachineName
	I0409 00:23:51.585371   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) Calling .DriverName
	I0409 00:23:51.585549   60516 start.go:159] libmachine.API.Create for "kubernetes-upgrade-636554" (driver="kvm2")
	I0409 00:23:51.585579   60516 client.go:168] LocalClient.Create starting
	I0409 00:23:51.585623   60516 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20501-9125/.minikube/certs/ca.pem
	I0409 00:23:51.585662   60516 main.go:141] libmachine: Decoding PEM data...
	I0409 00:23:51.585685   60516 main.go:141] libmachine: Parsing certificate...
	I0409 00:23:51.585771   60516 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20501-9125/.minikube/certs/cert.pem
	I0409 00:23:51.585802   60516 main.go:141] libmachine: Decoding PEM data...
	I0409 00:23:51.585824   60516 main.go:141] libmachine: Parsing certificate...
	I0409 00:23:51.585846   60516 main.go:141] libmachine: Running pre-create checks...
	I0409 00:23:51.585861   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) Calling .PreCreateCheck
	I0409 00:23:51.586187   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) Calling .GetConfigRaw
	I0409 00:23:51.586600   60516 main.go:141] libmachine: Creating machine...
	I0409 00:23:51.586612   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) Calling .Create
	I0409 00:23:51.586775   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) creating KVM machine...
	I0409 00:23:51.586793   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) creating network...
	I0409 00:23:51.588304   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) DBG | found existing default KVM network
	I0409 00:23:51.589123   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) DBG | I0409 00:23:51.588953   61009 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:d7:34:3c} reservation:<nil>}
	I0409 00:23:51.590031   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) DBG | I0409 00:23:51.589943   61009 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00021f810}
	I0409 00:23:51.590081   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) DBG | created network xml: 
	I0409 00:23:51.590115   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) DBG | <network>
	I0409 00:23:51.590137   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) DBG |   <name>mk-kubernetes-upgrade-636554</name>
	I0409 00:23:51.590144   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) DBG |   <dns enable='no'/>
	I0409 00:23:51.590153   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) DBG |   
	I0409 00:23:51.590160   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I0409 00:23:51.590172   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) DBG |     <dhcp>
	I0409 00:23:51.590181   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I0409 00:23:51.590188   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) DBG |     </dhcp>
	I0409 00:23:51.590193   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) DBG |   </ip>
	I0409 00:23:51.590200   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) DBG |   
	I0409 00:23:51.590206   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) DBG | </network>
	I0409 00:23:51.590216   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) DBG | 
	I0409 00:23:51.595463   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) DBG | trying to create private KVM network mk-kubernetes-upgrade-636554 192.168.50.0/24...
	I0409 00:23:51.676421   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) DBG | private KVM network mk-kubernetes-upgrade-636554 192.168.50.0/24 created
	I0409 00:23:51.676476   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) setting up store path in /home/jenkins/minikube-integration/20501-9125/.minikube/machines/kubernetes-upgrade-636554 ...
	I0409 00:23:51.676486   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) building disk image from file:///home/jenkins/minikube-integration/20501-9125/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0409 00:23:51.676501   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) DBG | I0409 00:23:51.676438   61009 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20501-9125/.minikube
	I0409 00:23:51.676624   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) Downloading /home/jenkins/minikube-integration/20501-9125/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20501-9125/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0409 00:23:51.958779   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) DBG | I0409 00:23:51.958650   61009 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20501-9125/.minikube/machines/kubernetes-upgrade-636554/id_rsa...
	I0409 00:23:52.415349   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) DBG | I0409 00:23:52.415223   61009 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20501-9125/.minikube/machines/kubernetes-upgrade-636554/kubernetes-upgrade-636554.rawdisk...
	I0409 00:23:52.415383   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) DBG | Writing magic tar header
	I0409 00:23:52.415401   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) DBG | Writing SSH key tar header
	I0409 00:23:52.415414   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) DBG | I0409 00:23:52.415340   61009 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20501-9125/.minikube/machines/kubernetes-upgrade-636554 ...
	I0409 00:23:52.415430   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20501-9125/.minikube/machines/kubernetes-upgrade-636554
	I0409 00:23:52.415498   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) setting executable bit set on /home/jenkins/minikube-integration/20501-9125/.minikube/machines/kubernetes-upgrade-636554 (perms=drwx------)
	I0409 00:23:52.415532   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) setting executable bit set on /home/jenkins/minikube-integration/20501-9125/.minikube/machines (perms=drwxr-xr-x)
	I0409 00:23:52.415545   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20501-9125/.minikube/machines
	I0409 00:23:52.415565   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20501-9125/.minikube
	I0409 00:23:52.415589   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20501-9125
	I0409 00:23:52.415605   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) setting executable bit set on /home/jenkins/minikube-integration/20501-9125/.minikube (perms=drwxr-xr-x)
	I0409 00:23:52.415623   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) setting executable bit set on /home/jenkins/minikube-integration/20501-9125 (perms=drwxrwxr-x)
	I0409 00:23:52.415637   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0409 00:23:52.415650   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0409 00:23:52.415658   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) creating domain...
	I0409 00:23:52.415683   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0409 00:23:52.415704   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) DBG | checking permissions on dir: /home/jenkins
	I0409 00:23:52.415743   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) DBG | checking permissions on dir: /home
	I0409 00:23:52.415758   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) DBG | skipping /home - not owner
	I0409 00:23:52.417014   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) define libvirt domain using xml: 
	I0409 00:23:52.417034   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) <domain type='kvm'>
	I0409 00:23:52.417044   60516 main.go:141] libmachine: (kubernetes-upgrade-636554)   <name>kubernetes-upgrade-636554</name>
	I0409 00:23:52.417055   60516 main.go:141] libmachine: (kubernetes-upgrade-636554)   <memory unit='MiB'>2200</memory>
	I0409 00:23:52.417064   60516 main.go:141] libmachine: (kubernetes-upgrade-636554)   <vcpu>2</vcpu>
	I0409 00:23:52.417071   60516 main.go:141] libmachine: (kubernetes-upgrade-636554)   <features>
	I0409 00:23:52.417080   60516 main.go:141] libmachine: (kubernetes-upgrade-636554)     <acpi/>
	I0409 00:23:52.417092   60516 main.go:141] libmachine: (kubernetes-upgrade-636554)     <apic/>
	I0409 00:23:52.417098   60516 main.go:141] libmachine: (kubernetes-upgrade-636554)     <pae/>
	I0409 00:23:52.417105   60516 main.go:141] libmachine: (kubernetes-upgrade-636554)     
	I0409 00:23:52.417113   60516 main.go:141] libmachine: (kubernetes-upgrade-636554)   </features>
	I0409 00:23:52.417119   60516 main.go:141] libmachine: (kubernetes-upgrade-636554)   <cpu mode='host-passthrough'>
	I0409 00:23:52.417130   60516 main.go:141] libmachine: (kubernetes-upgrade-636554)   
	I0409 00:23:52.417148   60516 main.go:141] libmachine: (kubernetes-upgrade-636554)   </cpu>
	I0409 00:23:52.417160   60516 main.go:141] libmachine: (kubernetes-upgrade-636554)   <os>
	I0409 00:23:52.417167   60516 main.go:141] libmachine: (kubernetes-upgrade-636554)     <type>hvm</type>
	I0409 00:23:52.417175   60516 main.go:141] libmachine: (kubernetes-upgrade-636554)     <boot dev='cdrom'/>
	I0409 00:23:52.417183   60516 main.go:141] libmachine: (kubernetes-upgrade-636554)     <boot dev='hd'/>
	I0409 00:23:52.417192   60516 main.go:141] libmachine: (kubernetes-upgrade-636554)     <bootmenu enable='no'/>
	I0409 00:23:52.417198   60516 main.go:141] libmachine: (kubernetes-upgrade-636554)   </os>
	I0409 00:23:52.417206   60516 main.go:141] libmachine: (kubernetes-upgrade-636554)   <devices>
	I0409 00:23:52.417213   60516 main.go:141] libmachine: (kubernetes-upgrade-636554)     <disk type='file' device='cdrom'>
	I0409 00:23:52.417229   60516 main.go:141] libmachine: (kubernetes-upgrade-636554)       <source file='/home/jenkins/minikube-integration/20501-9125/.minikube/machines/kubernetes-upgrade-636554/boot2docker.iso'/>
	I0409 00:23:52.417249   60516 main.go:141] libmachine: (kubernetes-upgrade-636554)       <target dev='hdc' bus='scsi'/>
	I0409 00:23:52.417281   60516 main.go:141] libmachine: (kubernetes-upgrade-636554)       <readonly/>
	I0409 00:23:52.417311   60516 main.go:141] libmachine: (kubernetes-upgrade-636554)     </disk>
	I0409 00:23:52.417324   60516 main.go:141] libmachine: (kubernetes-upgrade-636554)     <disk type='file' device='disk'>
	I0409 00:23:52.417344   60516 main.go:141] libmachine: (kubernetes-upgrade-636554)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0409 00:23:52.417363   60516 main.go:141] libmachine: (kubernetes-upgrade-636554)       <source file='/home/jenkins/minikube-integration/20501-9125/.minikube/machines/kubernetes-upgrade-636554/kubernetes-upgrade-636554.rawdisk'/>
	I0409 00:23:52.417375   60516 main.go:141] libmachine: (kubernetes-upgrade-636554)       <target dev='hda' bus='virtio'/>
	I0409 00:23:52.417389   60516 main.go:141] libmachine: (kubernetes-upgrade-636554)     </disk>
	I0409 00:23:52.417402   60516 main.go:141] libmachine: (kubernetes-upgrade-636554)     <interface type='network'>
	I0409 00:23:52.417423   60516 main.go:141] libmachine: (kubernetes-upgrade-636554)       <source network='mk-kubernetes-upgrade-636554'/>
	I0409 00:23:52.417437   60516 main.go:141] libmachine: (kubernetes-upgrade-636554)       <model type='virtio'/>
	I0409 00:23:52.417449   60516 main.go:141] libmachine: (kubernetes-upgrade-636554)     </interface>
	I0409 00:23:52.417459   60516 main.go:141] libmachine: (kubernetes-upgrade-636554)     <interface type='network'>
	I0409 00:23:52.417470   60516 main.go:141] libmachine: (kubernetes-upgrade-636554)       <source network='default'/>
	I0409 00:23:52.417479   60516 main.go:141] libmachine: (kubernetes-upgrade-636554)       <model type='virtio'/>
	I0409 00:23:52.417489   60516 main.go:141] libmachine: (kubernetes-upgrade-636554)     </interface>
	I0409 00:23:52.417499   60516 main.go:141] libmachine: (kubernetes-upgrade-636554)     <serial type='pty'>
	I0409 00:23:52.417507   60516 main.go:141] libmachine: (kubernetes-upgrade-636554)       <target port='0'/>
	I0409 00:23:52.417516   60516 main.go:141] libmachine: (kubernetes-upgrade-636554)     </serial>
	I0409 00:23:52.417536   60516 main.go:141] libmachine: (kubernetes-upgrade-636554)     <console type='pty'>
	I0409 00:23:52.417547   60516 main.go:141] libmachine: (kubernetes-upgrade-636554)       <target type='serial' port='0'/>
	I0409 00:23:52.417558   60516 main.go:141] libmachine: (kubernetes-upgrade-636554)     </console>
	I0409 00:23:52.417568   60516 main.go:141] libmachine: (kubernetes-upgrade-636554)     <rng model='virtio'>
	I0409 00:23:52.417578   60516 main.go:141] libmachine: (kubernetes-upgrade-636554)       <backend model='random'>/dev/random</backend>
	I0409 00:23:52.417587   60516 main.go:141] libmachine: (kubernetes-upgrade-636554)     </rng>
	I0409 00:23:52.417595   60516 main.go:141] libmachine: (kubernetes-upgrade-636554)     
	I0409 00:23:52.417605   60516 main.go:141] libmachine: (kubernetes-upgrade-636554)     
	I0409 00:23:52.417613   60516 main.go:141] libmachine: (kubernetes-upgrade-636554)   </devices>
	I0409 00:23:52.417623   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) </domain>
	I0409 00:23:52.417632   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) 
	I0409 00:23:52.425286   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) DBG | domain kubernetes-upgrade-636554 has defined MAC address 52:54:00:df:12:4f in network default
	I0409 00:23:52.426197   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) DBG | domain kubernetes-upgrade-636554 has defined MAC address 52:54:00:42:e7:ae in network mk-kubernetes-upgrade-636554
	I0409 00:23:52.426234   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) starting domain...
	I0409 00:23:52.426257   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) ensuring networks are active...
	I0409 00:23:52.427306   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) Ensuring network default is active
	I0409 00:23:52.427746   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) Ensuring network mk-kubernetes-upgrade-636554 is active
	I0409 00:23:52.428555   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) getting domain XML...
	I0409 00:23:52.429382   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) creating domain...
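
The XML above is the libvirt domain definition the kvm2 driver hands to libvirtd before booting the VM. As a rough illustration only (minikube talks to libvirt directly rather than shelling out), defining and starting such a domain could look like the sketch below, using virsh with placeholder paths and names:

package main

import (
	"fmt"
	"os/exec"
)

// defineAndStart registers a domain XML file with libvirt and boots the
// domain, mirroring the "getting domain XML... / creating domain..." steps
// above. It shells out to virsh for clarity; the XML path, domain name, and
// connection URI are illustrative placeholders.
func defineAndStart(xmlPath, domain string) error {
	for _, args := range [][]string{
		{"--connect", "qemu:///system", "define", xmlPath},
		{"--connect", "qemu:///system", "start", domain},
	} {
		if out, err := exec.Command("virsh", args...).CombinedOutput(); err != nil {
			return fmt.Errorf("virsh %v: %v\n%s", args, err, out)
		}
	}
	return nil
}

func main() {
	if err := defineAndStart("kubernetes-upgrade-636554.xml", "kubernetes-upgrade-636554"); err != nil {
		fmt.Println(err)
	}
}
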
	I0409 00:23:53.672011   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) waiting for IP...
	I0409 00:23:53.674096   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) DBG | domain kubernetes-upgrade-636554 has defined MAC address 52:54:00:42:e7:ae in network mk-kubernetes-upgrade-636554
	I0409 00:23:53.674731   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) DBG | unable to find current IP address of domain kubernetes-upgrade-636554 in network mk-kubernetes-upgrade-636554
	I0409 00:23:53.674832   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) DBG | I0409 00:23:53.674751   61009 retry.go:31] will retry after 279.331037ms: waiting for domain to come up
	I0409 00:23:53.955419   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) DBG | domain kubernetes-upgrade-636554 has defined MAC address 52:54:00:42:e7:ae in network mk-kubernetes-upgrade-636554
	I0409 00:23:53.955928   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) DBG | unable to find current IP address of domain kubernetes-upgrade-636554 in network mk-kubernetes-upgrade-636554
	I0409 00:23:53.956011   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) DBG | I0409 00:23:53.955918   61009 retry.go:31] will retry after 278.168963ms: waiting for domain to come up
	I0409 00:23:54.425749   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) DBG | domain kubernetes-upgrade-636554 has defined MAC address 52:54:00:42:e7:ae in network mk-kubernetes-upgrade-636554
	I0409 00:23:54.426174   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) DBG | unable to find current IP address of domain kubernetes-upgrade-636554 in network mk-kubernetes-upgrade-636554
	I0409 00:23:54.426243   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) DBG | I0409 00:23:54.426170   61009 retry.go:31] will retry after 439.778159ms: waiting for domain to come up
	I0409 00:23:54.868102   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) DBG | domain kubernetes-upgrade-636554 has defined MAC address 52:54:00:42:e7:ae in network mk-kubernetes-upgrade-636554
	I0409 00:23:54.868736   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) DBG | unable to find current IP address of domain kubernetes-upgrade-636554 in network mk-kubernetes-upgrade-636554
	I0409 00:23:54.868755   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) DBG | I0409 00:23:54.868715   61009 retry.go:31] will retry after 525.969825ms: waiting for domain to come up
	I0409 00:23:55.396428   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) DBG | domain kubernetes-upgrade-636554 has defined MAC address 52:54:00:42:e7:ae in network mk-kubernetes-upgrade-636554
	I0409 00:23:55.396953   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) DBG | unable to find current IP address of domain kubernetes-upgrade-636554 in network mk-kubernetes-upgrade-636554
	I0409 00:23:55.396979   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) DBG | I0409 00:23:55.396927   61009 retry.go:31] will retry after 546.269721ms: waiting for domain to come up
	I0409 00:23:55.944459   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) DBG | domain kubernetes-upgrade-636554 has defined MAC address 52:54:00:42:e7:ae in network mk-kubernetes-upgrade-636554
	I0409 00:23:55.944990   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) DBG | unable to find current IP address of domain kubernetes-upgrade-636554 in network mk-kubernetes-upgrade-636554
	I0409 00:23:55.945016   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) DBG | I0409 00:23:55.944947   61009 retry.go:31] will retry after 827.086441ms: waiting for domain to come up
	I0409 00:23:56.773443   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) DBG | domain kubernetes-upgrade-636554 has defined MAC address 52:54:00:42:e7:ae in network mk-kubernetes-upgrade-636554
	I0409 00:23:56.773966   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) DBG | unable to find current IP address of domain kubernetes-upgrade-636554 in network mk-kubernetes-upgrade-636554
	I0409 00:23:56.773995   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) DBG | I0409 00:23:56.773902   61009 retry.go:31] will retry after 853.132781ms: waiting for domain to come up
	I0409 00:23:57.628877   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) DBG | domain kubernetes-upgrade-636554 has defined MAC address 52:54:00:42:e7:ae in network mk-kubernetes-upgrade-636554
	I0409 00:23:57.629366   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) DBG | unable to find current IP address of domain kubernetes-upgrade-636554 in network mk-kubernetes-upgrade-636554
	I0409 00:23:57.629383   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) DBG | I0409 00:23:57.629325   61009 retry.go:31] will retry after 1.094336898s: waiting for domain to come up
	I0409 00:23:58.724931   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) DBG | domain kubernetes-upgrade-636554 has defined MAC address 52:54:00:42:e7:ae in network mk-kubernetes-upgrade-636554
	I0409 00:23:58.725548   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) DBG | unable to find current IP address of domain kubernetes-upgrade-636554 in network mk-kubernetes-upgrade-636554
	I0409 00:23:58.725578   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) DBG | I0409 00:23:58.725497   61009 retry.go:31] will retry after 1.121782577s: waiting for domain to come up
	I0409 00:23:59.848340   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) DBG | domain kubernetes-upgrade-636554 has defined MAC address 52:54:00:42:e7:ae in network mk-kubernetes-upgrade-636554
	I0409 00:23:59.848871   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) DBG | unable to find current IP address of domain kubernetes-upgrade-636554 in network mk-kubernetes-upgrade-636554
	I0409 00:23:59.848897   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) DBG | I0409 00:23:59.848846   61009 retry.go:31] will retry after 1.844283272s: waiting for domain to come up
	I0409 00:24:01.694482   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) DBG | domain kubernetes-upgrade-636554 has defined MAC address 52:54:00:42:e7:ae in network mk-kubernetes-upgrade-636554
	I0409 00:24:01.694881   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) DBG | unable to find current IP address of domain kubernetes-upgrade-636554 in network mk-kubernetes-upgrade-636554
	I0409 00:24:01.694950   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) DBG | I0409 00:24:01.694876   61009 retry.go:31] will retry after 2.635339466s: waiting for domain to come up
	I0409 00:24:04.332911   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) DBG | domain kubernetes-upgrade-636554 has defined MAC address 52:54:00:42:e7:ae in network mk-kubernetes-upgrade-636554
	I0409 00:24:04.333557   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) DBG | unable to find current IP address of domain kubernetes-upgrade-636554 in network mk-kubernetes-upgrade-636554
	I0409 00:24:04.333578   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) DBG | I0409 00:24:04.333535   61009 retry.go:31] will retry after 3.636815586s: waiting for domain to come up
	I0409 00:24:07.972698   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) DBG | domain kubernetes-upgrade-636554 has defined MAC address 52:54:00:42:e7:ae in network mk-kubernetes-upgrade-636554
	I0409 00:24:07.973089   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) DBG | unable to find current IP address of domain kubernetes-upgrade-636554 in network mk-kubernetes-upgrade-636554
	I0409 00:24:07.973118   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) DBG | I0409 00:24:07.973053   61009 retry.go:31] will retry after 3.349577628s: waiting for domain to come up
	I0409 00:24:11.326417   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) DBG | domain kubernetes-upgrade-636554 has defined MAC address 52:54:00:42:e7:ae in network mk-kubernetes-upgrade-636554
	I0409 00:24:11.326830   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) DBG | unable to find current IP address of domain kubernetes-upgrade-636554 in network mk-kubernetes-upgrade-636554
	I0409 00:24:11.326851   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) DBG | I0409 00:24:11.326796   61009 retry.go:31] will retry after 3.439170132s: waiting for domain to come up
	I0409 00:24:14.769694   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) DBG | domain kubernetes-upgrade-636554 has defined MAC address 52:54:00:42:e7:ae in network mk-kubernetes-upgrade-636554
	I0409 00:24:14.770204   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) found domain IP: 192.168.50.37
	I0409 00:24:14.770233   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) DBG | domain kubernetes-upgrade-636554 has current primary IP address 192.168.50.37 and MAC address 52:54:00:42:e7:ae in network mk-kubernetes-upgrade-636554
	I0409 00:24:14.770238   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) reserving static IP address...
	I0409 00:24:14.770573   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-636554", mac: "52:54:00:42:e7:ae", ip: "192.168.50.37"} in network mk-kubernetes-upgrade-636554
	I0409 00:24:14.842634   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) reserved static IP address 192.168.50.37 for domain kubernetes-upgrade-636554
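
The "will retry after ...: waiting for domain to come up" lines above poll the libvirt network's DHCP leases for the domain's MAC with a growing, jittered backoff. A minimal sketch of that polling pattern, assuming a hypothetical lookupIP helper in place of the real lease parsing:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// errNoLease stands in for "unable to find current IP address" above.
var errNoLease = errors.New("no DHCP lease for MAC yet")

// lookupIP is a hypothetical stand-in for reading the libvirt network's DHCP
// leases and matching the domain's MAC address. It always fails here so the
// loop below demonstrates the backoff.
func lookupIP(mac string) (string, error) { return "", errNoLease }

// waitForIP polls until an IP appears or the deadline passes, sleeping a
// randomized, slowly growing interval between attempts, like the
// "will retry after 279.331037ms" lines in the log.
func waitForIP(mac string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	base := 250 * time.Millisecond
	for attempt := 1; time.Now().Before(deadline); attempt++ {
		if ip, err := lookupIP(mac); err == nil {
			return ip, nil
		}
		wait := base*time.Duration(attempt) + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %v: waiting for domain to come up\n", wait)
		time.Sleep(wait)
	}
	return "", fmt.Errorf("timed out waiting for IP of %s", mac)
}

func main() {
	if ip, err := waitForIP("52:54:00:42:e7:ae", 3*time.Second); err != nil {
		fmt.Println(err)
	} else {
		fmt.Println("found domain IP:", ip)
	}
}
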
	I0409 00:24:14.842663   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) DBG | Getting to WaitForSSH function...
	I0409 00:24:14.842672   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) waiting for SSH...
	I0409 00:24:14.845357   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) DBG | domain kubernetes-upgrade-636554 has defined MAC address 52:54:00:42:e7:ae in network mk-kubernetes-upgrade-636554
	I0409 00:24:14.845691   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:e7:ae", ip: ""} in network mk-kubernetes-upgrade-636554: {Iface:virbr2 ExpiryTime:2025-04-09 01:24:06 +0000 UTC Type:0 Mac:52:54:00:42:e7:ae Iaid: IPaddr:192.168.50.37 Prefix:24 Hostname:minikube Clientid:01:52:54:00:42:e7:ae}
	I0409 00:24:14.845713   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) DBG | domain kubernetes-upgrade-636554 has defined IP address 192.168.50.37 and MAC address 52:54:00:42:e7:ae in network mk-kubernetes-upgrade-636554
	I0409 00:24:14.845845   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) DBG | Using SSH client type: external
	I0409 00:24:14.845873   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) DBG | Using SSH private key: /home/jenkins/minikube-integration/20501-9125/.minikube/machines/kubernetes-upgrade-636554/id_rsa (-rw-------)
	I0409 00:24:14.845914   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.37 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20501-9125/.minikube/machines/kubernetes-upgrade-636554/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0409 00:24:14.845933   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) DBG | About to run SSH command:
	I0409 00:24:14.845949   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) DBG | exit 0
	I0409 00:24:14.971492   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) DBG | SSH cmd err, output: <nil>: 
	I0409 00:24:14.971739   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) KVM machine creation complete
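
The "waiting for SSH..." step is satisfied by running a no-op command (exit 0) against the guest with the machine's generated key, as the WaitForSSH lines above show. A minimal sketch of that reachability probe using golang.org/x/crypto/ssh; the address, user, and key path are placeholders modeled on the log:

package main

import (
	"fmt"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

// sshAlive dials the guest and runs "exit 0", the same probe the WaitForSSH
// step uses to decide the machine is reachable.
func sshAlive(addr, user, keyPath string) error {
	pemBytes, err := os.ReadFile(keyPath)
	if err != nil {
		return err
	}
	signer, err := ssh.ParsePrivateKey(pemBytes)
	if err != nil {
		return err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no above
		Timeout:         10 * time.Second,
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return err
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		return err
	}
	defer sess.Close()
	return sess.Run("exit 0")
}

func main() {
	// Placeholder values modeled on the log; adjust for a real machine.
	if err := sshAlive("192.168.50.37:22", "docker", "/path/to/id_rsa"); err != nil {
		fmt.Println("ssh not ready:", err)
	}
}
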
	I0409 00:24:14.972005   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) Calling .GetConfigRaw
	I0409 00:24:14.972557   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) Calling .DriverName
	I0409 00:24:14.972746   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) Calling .DriverName
	I0409 00:24:14.972877   60516 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0409 00:24:14.972888   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) Calling .GetState
	I0409 00:24:14.974039   60516 main.go:141] libmachine: Detecting operating system of created instance...
	I0409 00:24:14.974055   60516 main.go:141] libmachine: Waiting for SSH to be available...
	I0409 00:24:14.974063   60516 main.go:141] libmachine: Getting to WaitForSSH function...
	I0409 00:24:14.974071   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) Calling .GetSSHHostname
	I0409 00:24:14.976208   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) DBG | domain kubernetes-upgrade-636554 has defined MAC address 52:54:00:42:e7:ae in network mk-kubernetes-upgrade-636554
	I0409 00:24:14.976551   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:e7:ae", ip: ""} in network mk-kubernetes-upgrade-636554: {Iface:virbr2 ExpiryTime:2025-04-09 01:24:06 +0000 UTC Type:0 Mac:52:54:00:42:e7:ae Iaid: IPaddr:192.168.50.37 Prefix:24 Hostname:kubernetes-upgrade-636554 Clientid:01:52:54:00:42:e7:ae}
	I0409 00:24:14.976572   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) DBG | domain kubernetes-upgrade-636554 has defined IP address 192.168.50.37 and MAC address 52:54:00:42:e7:ae in network mk-kubernetes-upgrade-636554
	I0409 00:24:14.976731   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) Calling .GetSSHPort
	I0409 00:24:14.976896   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) Calling .GetSSHKeyPath
	I0409 00:24:14.977027   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) Calling .GetSSHKeyPath
	I0409 00:24:14.977148   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) Calling .GetSSHUsername
	I0409 00:24:14.977306   60516 main.go:141] libmachine: Using SSH client type: native
	I0409 00:24:14.977586   60516 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.37 22 <nil> <nil>}
	I0409 00:24:14.977598   60516 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0409 00:24:15.086841   60516 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0409 00:24:15.086867   60516 main.go:141] libmachine: Detecting the provisioner...
	I0409 00:24:15.086886   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) Calling .GetSSHHostname
	I0409 00:24:15.089596   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) DBG | domain kubernetes-upgrade-636554 has defined MAC address 52:54:00:42:e7:ae in network mk-kubernetes-upgrade-636554
	I0409 00:24:15.089977   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:e7:ae", ip: ""} in network mk-kubernetes-upgrade-636554: {Iface:virbr2 ExpiryTime:2025-04-09 01:24:06 +0000 UTC Type:0 Mac:52:54:00:42:e7:ae Iaid: IPaddr:192.168.50.37 Prefix:24 Hostname:kubernetes-upgrade-636554 Clientid:01:52:54:00:42:e7:ae}
	I0409 00:24:15.090007   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) DBG | domain kubernetes-upgrade-636554 has defined IP address 192.168.50.37 and MAC address 52:54:00:42:e7:ae in network mk-kubernetes-upgrade-636554
	I0409 00:24:15.090107   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) Calling .GetSSHPort
	I0409 00:24:15.090304   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) Calling .GetSSHKeyPath
	I0409 00:24:15.090473   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) Calling .GetSSHKeyPath
	I0409 00:24:15.090619   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) Calling .GetSSHUsername
	I0409 00:24:15.090769   60516 main.go:141] libmachine: Using SSH client type: native
	I0409 00:24:15.090971   60516 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.37 22 <nil> <nil>}
	I0409 00:24:15.090982   60516 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0409 00:24:15.200812   60516 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0409 00:24:15.200896   60516 main.go:141] libmachine: found compatible host: buildroot
	I0409 00:24:15.200906   60516 main.go:141] libmachine: Provisioning with buildroot...
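
The provisioner is picked by reading /etc/os-release over SSH and matching the distribution fields, which is why the Buildroot block appears above. A small sketch of that detection, parsing the same key=value format:

package main

import (
	"bufio"
	"fmt"
	"strings"
)

// detectProvisioner scans os-release content (as returned by
// "cat /etc/os-release" above) and reports the distribution ID.
func detectProvisioner(osRelease string) string {
	sc := bufio.NewScanner(strings.NewReader(osRelease))
	for sc.Scan() {
		if k, v, ok := strings.Cut(sc.Text(), "="); ok && k == "ID" {
			return strings.Trim(v, `"`)
		}
	}
	return "unknown"
}

func main() {
	sample := "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\nVERSION_ID=2023.02.9\n"
	fmt.Println("found compatible host:", detectProvisioner(sample)) // buildroot
}
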
	I0409 00:24:15.200922   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) Calling .GetMachineName
	I0409 00:24:15.201135   60516 buildroot.go:166] provisioning hostname "kubernetes-upgrade-636554"
	I0409 00:24:15.201178   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) Calling .GetMachineName
	I0409 00:24:15.201351   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) Calling .GetSSHHostname
	I0409 00:24:15.203835   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) DBG | domain kubernetes-upgrade-636554 has defined MAC address 52:54:00:42:e7:ae in network mk-kubernetes-upgrade-636554
	I0409 00:24:15.204242   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:e7:ae", ip: ""} in network mk-kubernetes-upgrade-636554: {Iface:virbr2 ExpiryTime:2025-04-09 01:24:06 +0000 UTC Type:0 Mac:52:54:00:42:e7:ae Iaid: IPaddr:192.168.50.37 Prefix:24 Hostname:kubernetes-upgrade-636554 Clientid:01:52:54:00:42:e7:ae}
	I0409 00:24:15.204267   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) DBG | domain kubernetes-upgrade-636554 has defined IP address 192.168.50.37 and MAC address 52:54:00:42:e7:ae in network mk-kubernetes-upgrade-636554
	I0409 00:24:15.204466   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) Calling .GetSSHPort
	I0409 00:24:15.204628   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) Calling .GetSSHKeyPath
	I0409 00:24:15.204783   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) Calling .GetSSHKeyPath
	I0409 00:24:15.204923   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) Calling .GetSSHUsername
	I0409 00:24:15.205084   60516 main.go:141] libmachine: Using SSH client type: native
	I0409 00:24:15.205376   60516 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.37 22 <nil> <nil>}
	I0409 00:24:15.205401   60516 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-636554 && echo "kubernetes-upgrade-636554" | sudo tee /etc/hostname
	I0409 00:24:15.329696   60516 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-636554
	
	I0409 00:24:15.329730   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) Calling .GetSSHHostname
	I0409 00:24:15.332086   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) DBG | domain kubernetes-upgrade-636554 has defined MAC address 52:54:00:42:e7:ae in network mk-kubernetes-upgrade-636554
	I0409 00:24:15.332375   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:e7:ae", ip: ""} in network mk-kubernetes-upgrade-636554: {Iface:virbr2 ExpiryTime:2025-04-09 01:24:06 +0000 UTC Type:0 Mac:52:54:00:42:e7:ae Iaid: IPaddr:192.168.50.37 Prefix:24 Hostname:kubernetes-upgrade-636554 Clientid:01:52:54:00:42:e7:ae}
	I0409 00:24:15.332408   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) DBG | domain kubernetes-upgrade-636554 has defined IP address 192.168.50.37 and MAC address 52:54:00:42:e7:ae in network mk-kubernetes-upgrade-636554
	I0409 00:24:15.332530   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) Calling .GetSSHPort
	I0409 00:24:15.332714   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) Calling .GetSSHKeyPath
	I0409 00:24:15.332892   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) Calling .GetSSHKeyPath
	I0409 00:24:15.333030   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) Calling .GetSSHUsername
	I0409 00:24:15.333198   60516 main.go:141] libmachine: Using SSH client type: native
	I0409 00:24:15.333416   60516 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.37 22 <nil> <nil>}
	I0409 00:24:15.333440   60516 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-636554' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-636554/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-636554' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0409 00:24:15.451974   60516 main.go:141] libmachine: SSH cmd err, output: <nil>: 
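
Provisioning the hostname is two shell commands run over SSH: the hostname/tee one-liner at the top of this block and the idempotent /etc/hosts edit just above. A sketch that composes those command strings (execution over an SSH runner is left out; only the shell text mirrors the log):

package main

import "fmt"

// hostnameCommands returns the two shell commands used above to set the guest
// hostname and make /etc/hosts agree with it.
func hostnameCommands(name string) []string {
	set := fmt.Sprintf("sudo hostname %s && echo %q | sudo tee /etc/hostname", name, name)
	hosts := fmt.Sprintf(`if ! grep -xq '.*\s%[1]s' /etc/hosts; then
  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts
  else
    echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts
  fi
fi`, name)
	return []string{set, hosts}
}

func main() {
	for _, cmd := range hostnameCommands("kubernetes-upgrade-636554") {
		fmt.Println(cmd)
	}
}
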
	I0409 00:24:15.452012   60516 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20501-9125/.minikube CaCertPath:/home/jenkins/minikube-integration/20501-9125/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20501-9125/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20501-9125/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20501-9125/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20501-9125/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20501-9125/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20501-9125/.minikube}
	I0409 00:24:15.452028   60516 buildroot.go:174] setting up certificates
	I0409 00:24:15.452035   60516 provision.go:84] configureAuth start
	I0409 00:24:15.452045   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) Calling .GetMachineName
	I0409 00:24:15.452318   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) Calling .GetIP
	I0409 00:24:15.455001   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) DBG | domain kubernetes-upgrade-636554 has defined MAC address 52:54:00:42:e7:ae in network mk-kubernetes-upgrade-636554
	I0409 00:24:15.455316   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:e7:ae", ip: ""} in network mk-kubernetes-upgrade-636554: {Iface:virbr2 ExpiryTime:2025-04-09 01:24:06 +0000 UTC Type:0 Mac:52:54:00:42:e7:ae Iaid: IPaddr:192.168.50.37 Prefix:24 Hostname:kubernetes-upgrade-636554 Clientid:01:52:54:00:42:e7:ae}
	I0409 00:24:15.455336   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) DBG | domain kubernetes-upgrade-636554 has defined IP address 192.168.50.37 and MAC address 52:54:00:42:e7:ae in network mk-kubernetes-upgrade-636554
	I0409 00:24:15.455490   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) Calling .GetSSHHostname
	I0409 00:24:15.457528   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) DBG | domain kubernetes-upgrade-636554 has defined MAC address 52:54:00:42:e7:ae in network mk-kubernetes-upgrade-636554
	I0409 00:24:15.457807   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:e7:ae", ip: ""} in network mk-kubernetes-upgrade-636554: {Iface:virbr2 ExpiryTime:2025-04-09 01:24:06 +0000 UTC Type:0 Mac:52:54:00:42:e7:ae Iaid: IPaddr:192.168.50.37 Prefix:24 Hostname:kubernetes-upgrade-636554 Clientid:01:52:54:00:42:e7:ae}
	I0409 00:24:15.457835   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) DBG | domain kubernetes-upgrade-636554 has defined IP address 192.168.50.37 and MAC address 52:54:00:42:e7:ae in network mk-kubernetes-upgrade-636554
	I0409 00:24:15.457928   60516 provision.go:143] copyHostCerts
	I0409 00:24:15.457990   60516 exec_runner.go:144] found /home/jenkins/minikube-integration/20501-9125/.minikube/ca.pem, removing ...
	I0409 00:24:15.458008   60516 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20501-9125/.minikube/ca.pem
	I0409 00:24:15.458062   60516 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20501-9125/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20501-9125/.minikube/ca.pem (1082 bytes)
	I0409 00:24:15.458161   60516 exec_runner.go:144] found /home/jenkins/minikube-integration/20501-9125/.minikube/cert.pem, removing ...
	I0409 00:24:15.458170   60516 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20501-9125/.minikube/cert.pem
	I0409 00:24:15.458190   60516 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20501-9125/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20501-9125/.minikube/cert.pem (1123 bytes)
	I0409 00:24:15.458256   60516 exec_runner.go:144] found /home/jenkins/minikube-integration/20501-9125/.minikube/key.pem, removing ...
	I0409 00:24:15.458263   60516 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20501-9125/.minikube/key.pem
	I0409 00:24:15.458282   60516 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20501-9125/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20501-9125/.minikube/key.pem (1675 bytes)
	I0409 00:24:15.458338   60516 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20501-9125/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20501-9125/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20501-9125/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-636554 san=[127.0.0.1 192.168.50.37 kubernetes-upgrade-636554 localhost minikube]
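
The "generating server cert" step issues a TLS server certificate signed by the minikube CA, carrying the SAN list shown (127.0.0.1, the guest IP, the machine name, localhost, minikube). A compact sketch of such an issuance with crypto/x509; the CA here is generated on the fly purely for illustration, whereas minikube reuses its existing ca.pem/ca-key.pem:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

// must keeps the sketch compact; a real implementation would return errors.
func must[T any](v T, err error) T {
	if err != nil {
		panic(err)
	}
	return v
}

func main() {
	// Throwaway CA generated here only for illustration.
	caKey := must(rsa.GenerateKey(rand.Reader, 2048))
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caCert := must(x509.ParseCertificate(must(x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey))))

	// Server certificate carrying the SANs listed in the provision.go line above.
	srvKey := must(rsa.GenerateKey(rand.Reader, 2048))
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.kubernetes-upgrade-636554"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"kubernetes-upgrade-636554", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.37")},
	}
	der := must(x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey))
	fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})))
}
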
	I0409 00:24:15.719487   60516 provision.go:177] copyRemoteCerts
	I0409 00:24:15.719549   60516 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0409 00:24:15.719570   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) Calling .GetSSHHostname
	I0409 00:24:15.722343   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) DBG | domain kubernetes-upgrade-636554 has defined MAC address 52:54:00:42:e7:ae in network mk-kubernetes-upgrade-636554
	I0409 00:24:15.722630   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:e7:ae", ip: ""} in network mk-kubernetes-upgrade-636554: {Iface:virbr2 ExpiryTime:2025-04-09 01:24:06 +0000 UTC Type:0 Mac:52:54:00:42:e7:ae Iaid: IPaddr:192.168.50.37 Prefix:24 Hostname:kubernetes-upgrade-636554 Clientid:01:52:54:00:42:e7:ae}
	I0409 00:24:15.722658   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) DBG | domain kubernetes-upgrade-636554 has defined IP address 192.168.50.37 and MAC address 52:54:00:42:e7:ae in network mk-kubernetes-upgrade-636554
	I0409 00:24:15.722814   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) Calling .GetSSHPort
	I0409 00:24:15.723004   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) Calling .GetSSHKeyPath
	I0409 00:24:15.723155   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) Calling .GetSSHUsername
	I0409 00:24:15.723291   60516 sshutil.go:53] new ssh client: &{IP:192.168.50.37 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20501-9125/.minikube/machines/kubernetes-upgrade-636554/id_rsa Username:docker}
	I0409 00:24:15.809500   60516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20501-9125/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0409 00:24:15.835560   60516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20501-9125/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0409 00:24:15.860174   60516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20501-9125/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0409 00:24:15.881389   60516 provision.go:87] duration metric: took 429.340825ms to configureAuth
	I0409 00:24:15.881416   60516 buildroot.go:189] setting minikube options for container-runtime
	I0409 00:24:15.881594   60516 config.go:182] Loaded profile config "kubernetes-upgrade-636554": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0409 00:24:15.881685   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) Calling .GetSSHHostname
	I0409 00:24:15.884329   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) DBG | domain kubernetes-upgrade-636554 has defined MAC address 52:54:00:42:e7:ae in network mk-kubernetes-upgrade-636554
	I0409 00:24:15.884694   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:e7:ae", ip: ""} in network mk-kubernetes-upgrade-636554: {Iface:virbr2 ExpiryTime:2025-04-09 01:24:06 +0000 UTC Type:0 Mac:52:54:00:42:e7:ae Iaid: IPaddr:192.168.50.37 Prefix:24 Hostname:kubernetes-upgrade-636554 Clientid:01:52:54:00:42:e7:ae}
	I0409 00:24:15.884726   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) DBG | domain kubernetes-upgrade-636554 has defined IP address 192.168.50.37 and MAC address 52:54:00:42:e7:ae in network mk-kubernetes-upgrade-636554
	I0409 00:24:15.884864   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) Calling .GetSSHPort
	I0409 00:24:15.885074   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) Calling .GetSSHKeyPath
	I0409 00:24:15.885239   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) Calling .GetSSHKeyPath
	I0409 00:24:15.885406   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) Calling .GetSSHUsername
	I0409 00:24:15.885596   60516 main.go:141] libmachine: Using SSH client type: native
	I0409 00:24:15.885786   60516 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.37 22 <nil> <nil>}
	I0409 00:24:15.885799   60516 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0409 00:24:16.127817   60516 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0409 00:24:16.127859   60516 main.go:141] libmachine: Checking connection to Docker...
	I0409 00:24:16.127894   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) Calling .GetURL
	I0409 00:24:16.129187   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) DBG | using libvirt version 6000000
	I0409 00:24:16.131400   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) DBG | domain kubernetes-upgrade-636554 has defined MAC address 52:54:00:42:e7:ae in network mk-kubernetes-upgrade-636554
	I0409 00:24:16.131766   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:e7:ae", ip: ""} in network mk-kubernetes-upgrade-636554: {Iface:virbr2 ExpiryTime:2025-04-09 01:24:06 +0000 UTC Type:0 Mac:52:54:00:42:e7:ae Iaid: IPaddr:192.168.50.37 Prefix:24 Hostname:kubernetes-upgrade-636554 Clientid:01:52:54:00:42:e7:ae}
	I0409 00:24:16.131818   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) DBG | domain kubernetes-upgrade-636554 has defined IP address 192.168.50.37 and MAC address 52:54:00:42:e7:ae in network mk-kubernetes-upgrade-636554
	I0409 00:24:16.131971   60516 main.go:141] libmachine: Docker is up and running!
	I0409 00:24:16.131988   60516 main.go:141] libmachine: Reticulating splines...
	I0409 00:24:16.131995   60516 client.go:171] duration metric: took 24.546408436s to LocalClient.Create
	I0409 00:24:16.132015   60516 start.go:167] duration metric: took 24.546468078s to libmachine.API.Create "kubernetes-upgrade-636554"
	I0409 00:24:16.132027   60516 start.go:293] postStartSetup for "kubernetes-upgrade-636554" (driver="kvm2")
	I0409 00:24:16.132038   60516 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0409 00:24:16.132056   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) Calling .DriverName
	I0409 00:24:16.132307   60516 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0409 00:24:16.132344   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) Calling .GetSSHHostname
	I0409 00:24:16.134214   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) DBG | domain kubernetes-upgrade-636554 has defined MAC address 52:54:00:42:e7:ae in network mk-kubernetes-upgrade-636554
	I0409 00:24:16.134477   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:e7:ae", ip: ""} in network mk-kubernetes-upgrade-636554: {Iface:virbr2 ExpiryTime:2025-04-09 01:24:06 +0000 UTC Type:0 Mac:52:54:00:42:e7:ae Iaid: IPaddr:192.168.50.37 Prefix:24 Hostname:kubernetes-upgrade-636554 Clientid:01:52:54:00:42:e7:ae}
	I0409 00:24:16.134500   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) DBG | domain kubernetes-upgrade-636554 has defined IP address 192.168.50.37 and MAC address 52:54:00:42:e7:ae in network mk-kubernetes-upgrade-636554
	I0409 00:24:16.134614   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) Calling .GetSSHPort
	I0409 00:24:16.134783   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) Calling .GetSSHKeyPath
	I0409 00:24:16.134945   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) Calling .GetSSHUsername
	I0409 00:24:16.135122   60516 sshutil.go:53] new ssh client: &{IP:192.168.50.37 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20501-9125/.minikube/machines/kubernetes-upgrade-636554/id_rsa Username:docker}
	I0409 00:24:16.217248   60516 ssh_runner.go:195] Run: cat /etc/os-release
	I0409 00:24:16.220972   60516 info.go:137] Remote host: Buildroot 2023.02.9
	I0409 00:24:16.220998   60516 filesync.go:126] Scanning /home/jenkins/minikube-integration/20501-9125/.minikube/addons for local assets ...
	I0409 00:24:16.221076   60516 filesync.go:126] Scanning /home/jenkins/minikube-integration/20501-9125/.minikube/files for local assets ...
	I0409 00:24:16.221163   60516 filesync.go:149] local asset: /home/jenkins/minikube-integration/20501-9125/.minikube/files/etc/ssl/certs/163142.pem -> 163142.pem in /etc/ssl/certs
	I0409 00:24:16.221256   60516 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0409 00:24:16.229458   60516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20501-9125/.minikube/files/etc/ssl/certs/163142.pem --> /etc/ssl/certs/163142.pem (1708 bytes)
	I0409 00:24:16.251453   60516 start.go:296] duration metric: took 119.414459ms for postStartSetup
	I0409 00:24:16.251513   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) Calling .GetConfigRaw
	I0409 00:24:16.252076   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) Calling .GetIP
	I0409 00:24:16.254562   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) DBG | domain kubernetes-upgrade-636554 has defined MAC address 52:54:00:42:e7:ae in network mk-kubernetes-upgrade-636554
	I0409 00:24:16.254906   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:e7:ae", ip: ""} in network mk-kubernetes-upgrade-636554: {Iface:virbr2 ExpiryTime:2025-04-09 01:24:06 +0000 UTC Type:0 Mac:52:54:00:42:e7:ae Iaid: IPaddr:192.168.50.37 Prefix:24 Hostname:kubernetes-upgrade-636554 Clientid:01:52:54:00:42:e7:ae}
	I0409 00:24:16.254946   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) DBG | domain kubernetes-upgrade-636554 has defined IP address 192.168.50.37 and MAC address 52:54:00:42:e7:ae in network mk-kubernetes-upgrade-636554
	I0409 00:24:16.255132   60516 profile.go:143] Saving config to /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/kubernetes-upgrade-636554/config.json ...
	I0409 00:24:16.255337   60516 start.go:128] duration metric: took 24.694417609s to createHost
	I0409 00:24:16.255363   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) Calling .GetSSHHostname
	I0409 00:24:16.257371   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) DBG | domain kubernetes-upgrade-636554 has defined MAC address 52:54:00:42:e7:ae in network mk-kubernetes-upgrade-636554
	I0409 00:24:16.257629   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:e7:ae", ip: ""} in network mk-kubernetes-upgrade-636554: {Iface:virbr2 ExpiryTime:2025-04-09 01:24:06 +0000 UTC Type:0 Mac:52:54:00:42:e7:ae Iaid: IPaddr:192.168.50.37 Prefix:24 Hostname:kubernetes-upgrade-636554 Clientid:01:52:54:00:42:e7:ae}
	I0409 00:24:16.257649   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) DBG | domain kubernetes-upgrade-636554 has defined IP address 192.168.50.37 and MAC address 52:54:00:42:e7:ae in network mk-kubernetes-upgrade-636554
	I0409 00:24:16.257826   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) Calling .GetSSHPort
	I0409 00:24:16.258015   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) Calling .GetSSHKeyPath
	I0409 00:24:16.258202   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) Calling .GetSSHKeyPath
	I0409 00:24:16.258384   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) Calling .GetSSHUsername
	I0409 00:24:16.258521   60516 main.go:141] libmachine: Using SSH client type: native
	I0409 00:24:16.258713   60516 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.37 22 <nil> <nil>}
	I0409 00:24:16.258723   60516 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0409 00:24:16.368318   60516 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744158256.345306657
	
	I0409 00:24:16.368337   60516 fix.go:216] guest clock: 1744158256.345306657
	I0409 00:24:16.368344   60516 fix.go:229] Guest: 2025-04-09 00:24:16.345306657 +0000 UTC Remote: 2025-04-09 00:24:16.255348879 +0000 UTC m=+65.611818639 (delta=89.957778ms)
	I0409 00:24:16.368361   60516 fix.go:200] guest clock delta is within tolerance: 89.957778ms
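
The fix.go lines above compare the guest's `date +%s.%N` output against the host clock and accept the machine only when the delta stays within tolerance. A small sketch of that comparison; the tolerance value is hypothetical:

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock turns "date +%s.%N" output (e.g. 1744158256.345306657)
// into a time.Time.
func parseGuestClock(s string) (time.Time, error) {
	secStr, nsecStr, _ := strings.Cut(strings.TrimSpace(s), ".")
	sec, err := strconv.ParseInt(secStr, 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if nsecStr != "" {
		if nsec, err = strconv.ParseInt(nsecStr, 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	const tolerance = time.Second // hypothetical; pick per environment
	guest, err := parseGuestClock("1744158256.345306657") // value from the log
	if err != nil {
		panic(err)
	}
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	fmt.Printf("guest clock delta %v (within tolerance: %v)\n", delta, delta <= tolerance)
}
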
	I0409 00:24:16.368366   60516 start.go:83] releasing machines lock for "kubernetes-upgrade-636554", held for 24.807617935s
	I0409 00:24:16.368386   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) Calling .DriverName
	I0409 00:24:16.368667   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) Calling .GetIP
	I0409 00:24:16.371309   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) DBG | domain kubernetes-upgrade-636554 has defined MAC address 52:54:00:42:e7:ae in network mk-kubernetes-upgrade-636554
	I0409 00:24:16.371700   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:e7:ae", ip: ""} in network mk-kubernetes-upgrade-636554: {Iface:virbr2 ExpiryTime:2025-04-09 01:24:06 +0000 UTC Type:0 Mac:52:54:00:42:e7:ae Iaid: IPaddr:192.168.50.37 Prefix:24 Hostname:kubernetes-upgrade-636554 Clientid:01:52:54:00:42:e7:ae}
	I0409 00:24:16.371730   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) DBG | domain kubernetes-upgrade-636554 has defined IP address 192.168.50.37 and MAC address 52:54:00:42:e7:ae in network mk-kubernetes-upgrade-636554
	I0409 00:24:16.371919   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) Calling .DriverName
	I0409 00:24:16.372366   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) Calling .DriverName
	I0409 00:24:16.372551   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) Calling .DriverName
	I0409 00:24:16.372619   60516 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0409 00:24:16.372661   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) Calling .GetSSHHostname
	I0409 00:24:16.372789   60516 ssh_runner.go:195] Run: cat /version.json
	I0409 00:24:16.372813   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) Calling .GetSSHHostname
	I0409 00:24:16.375369   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) DBG | domain kubernetes-upgrade-636554 has defined MAC address 52:54:00:42:e7:ae in network mk-kubernetes-upgrade-636554
	I0409 00:24:16.375542   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) DBG | domain kubernetes-upgrade-636554 has defined MAC address 52:54:00:42:e7:ae in network mk-kubernetes-upgrade-636554
	I0409 00:24:16.375732   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:e7:ae", ip: ""} in network mk-kubernetes-upgrade-636554: {Iface:virbr2 ExpiryTime:2025-04-09 01:24:06 +0000 UTC Type:0 Mac:52:54:00:42:e7:ae Iaid: IPaddr:192.168.50.37 Prefix:24 Hostname:kubernetes-upgrade-636554 Clientid:01:52:54:00:42:e7:ae}
	I0409 00:24:16.375768   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) DBG | domain kubernetes-upgrade-636554 has defined IP address 192.168.50.37 and MAC address 52:54:00:42:e7:ae in network mk-kubernetes-upgrade-636554
	I0409 00:24:16.375924   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) Calling .GetSSHPort
	I0409 00:24:16.375938   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:e7:ae", ip: ""} in network mk-kubernetes-upgrade-636554: {Iface:virbr2 ExpiryTime:2025-04-09 01:24:06 +0000 UTC Type:0 Mac:52:54:00:42:e7:ae Iaid: IPaddr:192.168.50.37 Prefix:24 Hostname:kubernetes-upgrade-636554 Clientid:01:52:54:00:42:e7:ae}
	I0409 00:24:16.375960   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) DBG | domain kubernetes-upgrade-636554 has defined IP address 192.168.50.37 and MAC address 52:54:00:42:e7:ae in network mk-kubernetes-upgrade-636554
	I0409 00:24:16.376086   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) Calling .GetSSHPort
	I0409 00:24:16.376195   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) Calling .GetSSHKeyPath
	I0409 00:24:16.376199   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) Calling .GetSSHKeyPath
	I0409 00:24:16.376441   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) Calling .GetSSHUsername
	I0409 00:24:16.376445   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) Calling .GetSSHUsername
	I0409 00:24:16.376600   60516 sshutil.go:53] new ssh client: &{IP:192.168.50.37 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20501-9125/.minikube/machines/kubernetes-upgrade-636554/id_rsa Username:docker}
	I0409 00:24:16.376605   60516 sshutil.go:53] new ssh client: &{IP:192.168.50.37 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20501-9125/.minikube/machines/kubernetes-upgrade-636554/id_rsa Username:docker}
	I0409 00:24:16.489701   60516 ssh_runner.go:195] Run: systemctl --version
	I0409 00:24:16.495547   60516 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0409 00:24:16.647176   60516 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0409 00:24:16.653085   60516 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0409 00:24:16.653139   60516 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0409 00:24:16.667901   60516 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0409 00:24:16.667927   60516 start.go:495] detecting cgroup driver to use...
	I0409 00:24:16.668003   60516 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0409 00:24:16.684783   60516 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0409 00:24:16.697615   60516 docker.go:217] disabling cri-docker service (if available) ...
	I0409 00:24:16.697668   60516 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0409 00:24:16.709746   60516 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0409 00:24:16.721789   60516 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0409 00:24:16.841435   60516 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0409 00:24:17.028101   60516 docker.go:233] disabling docker service ...
	I0409 00:24:17.028179   60516 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0409 00:24:17.042887   60516 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0409 00:24:17.057045   60516 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0409 00:24:17.198391   60516 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0409 00:24:17.328532   60516 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0409 00:24:17.344192   60516 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0409 00:24:17.363361   60516 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0409 00:24:17.363433   60516 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0409 00:24:17.373299   60516 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0409 00:24:17.373388   60516 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0409 00:24:17.383501   60516 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0409 00:24:17.393348   60516 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
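
CRI-O is pointed at the desired pause image and cgroup driver by rewriting keys in /etc/crio/crio.conf.d/02-crio.conf with sed, as the Run lines above show. A sketch that composes those same sed invocations (running them over the SSH runner is out of scope here):

package main

import "fmt"

// crioConfigCommands returns the shell commands used above to set the pause
// image and cgroup manager in CRI-O's drop-in config.
func crioConfigCommands(pauseImage, cgroupDriver string) []string {
	conf := "/etc/crio/crio.conf.d/02-crio.conf"
	return []string{
		fmt.Sprintf(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "%s"|' %s`, pauseImage, conf),
		fmt.Sprintf(`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "%s"|' %s`, cgroupDriver, conf),
		fmt.Sprintf(`sudo sed -i '/conmon_cgroup = .*/d' %s`, conf),
		fmt.Sprintf(`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' %s`, conf),
	}
}

func main() {
	for _, c := range crioConfigCommands("registry.k8s.io/pause:3.2", "cgroupfs") {
		fmt.Println(c)
	}
}
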
	I0409 00:24:17.402959   60516 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0409 00:24:17.413667   60516 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0409 00:24:17.423848   60516 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0409 00:24:17.423921   60516 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0409 00:24:17.435662   60516 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0409 00:24:17.447477   60516 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0409 00:24:17.563704   60516 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0409 00:24:17.664939   60516 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0409 00:24:17.664998   60516 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0409 00:24:17.669581   60516 start.go:563] Will wait 60s for crictl version
	I0409 00:24:17.669639   60516 ssh_runner.go:195] Run: which crictl
	I0409 00:24:17.673114   60516 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0409 00:24:17.709374   60516 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0409 00:24:17.709473   60516 ssh_runner.go:195] Run: crio --version
	I0409 00:24:17.735300   60516 ssh_runner.go:195] Run: crio --version
	I0409 00:24:17.767616   60516 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0409 00:24:17.768935   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) Calling .GetIP
	I0409 00:24:17.771942   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) DBG | domain kubernetes-upgrade-636554 has defined MAC address 52:54:00:42:e7:ae in network mk-kubernetes-upgrade-636554
	I0409 00:24:17.772332   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:e7:ae", ip: ""} in network mk-kubernetes-upgrade-636554: {Iface:virbr2 ExpiryTime:2025-04-09 01:24:06 +0000 UTC Type:0 Mac:52:54:00:42:e7:ae Iaid: IPaddr:192.168.50.37 Prefix:24 Hostname:kubernetes-upgrade-636554 Clientid:01:52:54:00:42:e7:ae}
	I0409 00:24:17.772364   60516 main.go:141] libmachine: (kubernetes-upgrade-636554) DBG | domain kubernetes-upgrade-636554 has defined IP address 192.168.50.37 and MAC address 52:54:00:42:e7:ae in network mk-kubernetes-upgrade-636554
	I0409 00:24:17.772559   60516 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0409 00:24:17.776728   60516 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0409 00:24:17.789352   60516 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-636554 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-636554 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.37 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0409 00:24:17.789482   60516 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0409 00:24:17.789553   60516 ssh_runner.go:195] Run: sudo crictl images --output json
	I0409 00:24:17.825310   60516 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0409 00:24:17.825387   60516 ssh_runner.go:195] Run: which lz4
	I0409 00:24:17.829156   60516 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0409 00:24:17.833235   60516 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0409 00:24:17.833258   60516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20501-9125/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0409 00:24:19.359895   60516 crio.go:462] duration metric: took 1.530742048s to copy over tarball
	I0409 00:24:19.359991   60516 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0409 00:24:21.894804   60516 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.5347802s)
	I0409 00:24:21.894830   60516 crio.go:469] duration metric: took 2.534901308s to extract the tarball
	I0409 00:24:21.894840   60516 ssh_runner.go:146] rm: /preloaded.tar.lz4
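
With no preloaded images present in the runtime, the preload tarball is copied into the guest and unpacked with tar + lz4, and each step is timed ("duration metric: took ..."). A sketch of the extract step using the same tar flags as the log, wrapped in a simple timer; the tarball path is a placeholder:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// timed runs fn and reports how long it took, echoing the
// "duration metric: took ..." style used throughout the log.
func timed(label string, fn func() error) error {
	start := time.Now()
	err := fn()
	fmt.Printf("duration metric: took %v to %s\n", time.Since(start), label)
	return err
}

func main() {
	tarball := "/preloaded.tar.lz4" // placeholder; copied into the guest beforehand
	err := timed("extract the tarball", func() error {
		cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
			"-I", "lz4", "-C", "/var", "-xf", tarball)
		out, err := cmd.CombinedOutput()
		if err != nil {
			return fmt.Errorf("%v: %s", err, out)
		}
		return nil
	})
	if err != nil {
		fmt.Println("extract failed:", err)
	}
}
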
	I0409 00:24:21.944830   60516 ssh_runner.go:195] Run: sudo crictl images --output json
	I0409 00:24:21.988013   60516 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0409 00:24:21.988038   60516 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0409 00:24:21.988120   60516 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0409 00:24:21.988162   60516 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0409 00:24:21.988195   60516 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0409 00:24:21.988205   60516 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0409 00:24:21.988238   60516 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0409 00:24:21.988172   60516 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0409 00:24:21.988181   60516 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0409 00:24:21.988125   60516 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0409 00:24:21.989468   60516 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0409 00:24:21.989769   60516 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0409 00:24:21.989809   60516 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0409 00:24:21.989814   60516 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0409 00:24:21.989814   60516 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0409 00:24:21.989872   60516 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0409 00:24:21.989881   60516 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0409 00:24:21.989896   60516 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0409 00:24:22.186710   60516 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0409 00:24:22.195731   60516 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0409 00:24:22.226955   60516 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0409 00:24:22.235211   60516 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0409 00:24:22.245700   60516 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0409 00:24:22.247926   60516 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0409 00:24:22.256795   60516 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0409 00:24:22.256849   60516 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0409 00:24:22.256895   60516 ssh_runner.go:195] Run: which crictl
	I0409 00:24:22.269901   60516 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0409 00:24:22.269946   60516 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0409 00:24:22.269993   60516 ssh_runner.go:195] Run: which crictl
	I0409 00:24:22.270328   60516 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0409 00:24:22.305619   60516 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0409 00:24:22.305668   60516 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0409 00:24:22.305717   60516 ssh_runner.go:195] Run: which crictl
	I0409 00:24:22.352146   60516 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0409 00:24:22.352205   60516 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0409 00:24:22.352259   60516 ssh_runner.go:195] Run: which crictl
	I0409 00:24:22.369497   60516 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0409 00:24:22.369531   60516 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0409 00:24:22.369555   60516 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0409 00:24:22.369564   60516 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0409 00:24:22.369600   60516 ssh_runner.go:195] Run: which crictl
	I0409 00:24:22.369605   60516 ssh_runner.go:195] Run: which crictl
	I0409 00:24:22.369654   60516 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0409 00:24:22.369666   60516 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0409 00:24:22.369657   60516 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0409 00:24:22.369692   60516 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0409 00:24:22.369708   60516 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0409 00:24:22.369732   60516 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0409 00:24:22.369734   60516 ssh_runner.go:195] Run: which crictl
	I0409 00:24:22.455599   60516 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0409 00:24:22.455683   60516 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0409 00:24:22.455702   60516 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0409 00:24:22.455712   60516 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0409 00:24:22.455706   60516 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0409 00:24:22.455778   60516 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0409 00:24:22.455778   60516 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0409 00:24:22.608775   60516 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0409 00:24:22.608859   60516 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0409 00:24:22.608872   60516 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0409 00:24:22.608814   60516 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0409 00:24:22.608819   60516 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0409 00:24:22.608932   60516 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0409 00:24:22.608955   60516 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0409 00:24:22.758468   60516 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20501-9125/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0409 00:24:22.758597   60516 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0409 00:24:22.758611   60516 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20501-9125/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0409 00:24:22.758663   60516 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20501-9125/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0409 00:24:22.758668   60516 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0409 00:24:22.758754   60516 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0409 00:24:22.758758   60516 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20501-9125/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0409 00:24:22.811853   60516 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20501-9125/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0409 00:24:22.820312   60516 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20501-9125/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0409 00:24:22.829046   60516 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20501-9125/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0409 00:24:23.299802   60516 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0409 00:24:23.444048   60516 cache_images.go:92] duration metric: took 1.455993748s to LoadCachedImages
	W0409 00:24:23.444145   60516 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20501-9125/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20501-9125/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
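When an image is missing from the runtime, the next stop is the per-image tarball under .minikube/cache/images/...; here the kube-proxy_v1.20.0 file is absent on the Jenkins host, so LoadCachedImages gives up and the images are left for kubeadm to pull later. A trivial Go sketch of that on-disk presence check (the helper name is made up; the path is the one from the warning above):

	package main

	import (
		"fmt"
		"os"
	)

	// cachedImageExists is a made-up helper: it only checks whether the
	// per-image cache file that would be loaded is actually on disk.
	func cachedImageExists(path string) bool {
		_, err := os.Stat(path)
		return err == nil
	}

	func main() {
		p := "/home/jenkins/minikube-integration/20501-9125/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0"
		fmt.Println(cachedImageExists(p)) // false in this run, hence the warning above
	}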
	I0409 00:24:23.444161   60516 kubeadm.go:934] updating node { 192.168.50.37 8443 v1.20.0 crio true true} ...
	I0409 00:24:23.444293   60516 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-636554 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.37
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-636554 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
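The [Unit]/[Service] block above is the kubelet systemd drop-in written for this node; the ExecStart flags are filled in from the cluster config (binaries dir, CRI socket, hostname override, node IP). An illustrative Go sketch of rendering such a drop-in with text/template; the struct and field names are invented for the example, not minikube's real types:

	package main

	import (
		"os"
		"text/template"
	)

	// nodeFlags is an invented struct for this sketch, not minikube's real type.
	type nodeFlags struct {
		BinDir    string
		NodeName  string
		NodeIP    string
		CRISocket string
	}

	// dropInTmpl mirrors the drop-in shown in the log above.
	const dropInTmpl = "[Unit]\n" +
		"Wants=crio.service\n" +
		"\n" +
		"[Service]\n" +
		"ExecStart=\n" +
		"ExecStart={{.BinDir}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint={{.CRISocket}} --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip={{.NodeIP}}\n" +
		"\n" +
		"[Install]\n"

	func main() {
		t := template.Must(template.New("drop-in").Parse(dropInTmpl))
		// The rendered bytes would then be copied to
		// /etc/systemd/system/kubelet.service.d/10-kubeadm.conf on the guest.
		_ = t.Execute(os.Stdout, nodeFlags{
			BinDir:    "/var/lib/minikube/binaries/v1.20.0",
			NodeName:  "kubernetes-upgrade-636554",
			NodeIP:    "192.168.50.37",
			CRISocket: "unix:///var/run/crio/crio.sock",
		})
	}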
	I0409 00:24:23.444379   60516 ssh_runner.go:195] Run: crio config
	I0409 00:24:23.487098   60516 cni.go:84] Creating CNI manager for ""
	I0409 00:24:23.487132   60516 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0409 00:24:23.487147   60516 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0409 00:24:23.487166   60516 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.37 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-636554 NodeName:kubernetes-upgrade-636554 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.37"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.37 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0409 00:24:23.487318   60516 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.37
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-636554"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.37
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.37"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
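	The generated kubeadm.yaml above is a multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration). A small Go sketch that decodes such a stream document by document and prints each apiVersion/kind, assuming gopkg.in/yaml.v2 and an on-disk copy of the file (the path is illustrative):

	package main

	import (
		"fmt"
		"io"
		"os"

		"gopkg.in/yaml.v2"
	)

	func main() {
		f, err := os.Open("kubeadm.yaml") // illustrative path to a copy of the generated config
		if err != nil {
			panic(err)
		}
		defer f.Close()

		// Walk the multi-document stream and report each document's type.
		dec := yaml.NewDecoder(f)
		for {
			var doc map[string]interface{}
			if err := dec.Decode(&doc); err == io.EOF {
				break
			} else if err != nil {
				panic(err)
			}
			fmt.Printf("%v %v\n", doc["apiVersion"], doc["kind"])
		}
	}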
	
	I0409 00:24:23.487379   60516 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0409 00:24:23.496848   60516 binaries.go:44] Found k8s binaries, skipping transfer
	I0409 00:24:23.496909   60516 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0409 00:24:23.505797   60516 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (432 bytes)
	I0409 00:24:23.520728   60516 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0409 00:24:23.535510   60516 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0409 00:24:23.550507   60516 ssh_runner.go:195] Run: grep 192.168.50.37	control-plane.minikube.internal$ /etc/hosts
	I0409 00:24:23.553985   60516 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.37	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
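The bash one-liner above makes the hosts entry idempotent: any stale control-plane.minikube.internal line is filtered out and the current IP is appended. The same idea in a short Go sketch (path and entry are taken from the log; error handling is trimmed):

	package main

	import (
		"os"
		"strings"
	)

	// ensureHostsEntry drops any existing line for host and appends ip<TAB>host.
	func ensureHostsEntry(path, ip, host string) error {
		data, err := os.ReadFile(path)
		if err != nil && !os.IsNotExist(err) {
			return err
		}
		var kept []string
		for _, line := range strings.Split(string(data), "\n") {
			// Keep everything except a stale entry for this host.
			if line != "" && !strings.HasSuffix(line, "\t"+host) {
				kept = append(kept, line)
			}
		}
		kept = append(kept, ip+"\t"+host)
		return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
	}

	func main() {
		_ = ensureHostsEntry("/etc/hosts", "192.168.50.37", "control-plane.minikube.internal")
	}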
	I0409 00:24:23.565107   60516 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0409 00:24:23.691757   60516 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0409 00:24:23.710048   60516 certs.go:68] Setting up /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/kubernetes-upgrade-636554 for IP: 192.168.50.37
	I0409 00:24:23.710076   60516 certs.go:194] generating shared ca certs ...
	I0409 00:24:23.710096   60516 certs.go:226] acquiring lock for ca certs: {Name:mk0d455aae85017ac942481bbc1202ccedea144f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0409 00:24:23.710245   60516 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20501-9125/.minikube/ca.key
	I0409 00:24:23.710298   60516 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20501-9125/.minikube/proxy-client-ca.key
	I0409 00:24:23.710325   60516 certs.go:256] generating profile certs ...
	I0409 00:24:23.710397   60516 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/kubernetes-upgrade-636554/client.key
	I0409 00:24:23.710424   60516 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/kubernetes-upgrade-636554/client.crt with IP's: []
	I0409 00:24:24.027815   60516 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/kubernetes-upgrade-636554/client.crt ...
	I0409 00:24:24.027846   60516 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/kubernetes-upgrade-636554/client.crt: {Name:mk909320dd9d000d5e4ca709ae49da1713456354 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0409 00:24:24.028031   60516 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/kubernetes-upgrade-636554/client.key ...
	I0409 00:24:24.028050   60516 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/kubernetes-upgrade-636554/client.key: {Name:mk916e55bbdeeff23aba4f439f32a01f6776266e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0409 00:24:24.028158   60516 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/kubernetes-upgrade-636554/apiserver.key.08cdde78
	I0409 00:24:24.028177   60516 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/kubernetes-upgrade-636554/apiserver.crt.08cdde78 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.37]
	I0409 00:24:24.088915   60516 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/kubernetes-upgrade-636554/apiserver.crt.08cdde78 ...
	I0409 00:24:24.088945   60516 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/kubernetes-upgrade-636554/apiserver.crt.08cdde78: {Name:mk234290dfacb0048b565e965f6fdd7527df1739 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0409 00:24:24.089135   60516 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/kubernetes-upgrade-636554/apiserver.key.08cdde78 ...
	I0409 00:24:24.089154   60516 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/kubernetes-upgrade-636554/apiserver.key.08cdde78: {Name:mkca63fd046cc6f478429e23448208525e3aa44a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0409 00:24:24.089262   60516 certs.go:381] copying /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/kubernetes-upgrade-636554/apiserver.crt.08cdde78 -> /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/kubernetes-upgrade-636554/apiserver.crt
	I0409 00:24:24.089353   60516 certs.go:385] copying /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/kubernetes-upgrade-636554/apiserver.key.08cdde78 -> /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/kubernetes-upgrade-636554/apiserver.key
	I0409 00:24:24.089420   60516 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/kubernetes-upgrade-636554/proxy-client.key
	I0409 00:24:24.089437   60516 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/kubernetes-upgrade-636554/proxy-client.crt with IP's: []
	I0409 00:24:24.127301   60516 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/kubernetes-upgrade-636554/proxy-client.crt ...
	I0409 00:24:24.127328   60516 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/kubernetes-upgrade-636554/proxy-client.crt: {Name:mkf9ee37b36270061f9a8c189fd5df49746bd55f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0409 00:24:24.127498   60516 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/kubernetes-upgrade-636554/proxy-client.key ...
	I0409 00:24:24.127516   60516 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/kubernetes-upgrade-636554/proxy-client.key: {Name:mk738e846edd701dfca67618cd35c491df8ce08b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0409 00:24:24.127711   60516 certs.go:484] found cert: /home/jenkins/minikube-integration/20501-9125/.minikube/certs/16314.pem (1338 bytes)
	W0409 00:24:24.127747   60516 certs.go:480] ignoring /home/jenkins/minikube-integration/20501-9125/.minikube/certs/16314_empty.pem, impossibly tiny 0 bytes
	I0409 00:24:24.127759   60516 certs.go:484] found cert: /home/jenkins/minikube-integration/20501-9125/.minikube/certs/ca-key.pem (1679 bytes)
	I0409 00:24:24.127782   60516 certs.go:484] found cert: /home/jenkins/minikube-integration/20501-9125/.minikube/certs/ca.pem (1082 bytes)
	I0409 00:24:24.127804   60516 certs.go:484] found cert: /home/jenkins/minikube-integration/20501-9125/.minikube/certs/cert.pem (1123 bytes)
	I0409 00:24:24.127826   60516 certs.go:484] found cert: /home/jenkins/minikube-integration/20501-9125/.minikube/certs/key.pem (1675 bytes)
	I0409 00:24:24.127879   60516 certs.go:484] found cert: /home/jenkins/minikube-integration/20501-9125/.minikube/files/etc/ssl/certs/163142.pem (1708 bytes)
	I0409 00:24:24.128467   60516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20501-9125/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0409 00:24:24.153074   60516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20501-9125/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0409 00:24:24.178878   60516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20501-9125/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0409 00:24:24.201691   60516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20501-9125/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0409 00:24:24.223009   60516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/kubernetes-upgrade-636554/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0409 00:24:24.244515   60516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/kubernetes-upgrade-636554/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0409 00:24:24.266025   60516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/kubernetes-upgrade-636554/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0409 00:24:24.289184   60516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/kubernetes-upgrade-636554/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0409 00:24:24.319992   60516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20501-9125/.minikube/files/etc/ssl/certs/163142.pem --> /usr/share/ca-certificates/163142.pem (1708 bytes)
	I0409 00:24:24.346709   60516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20501-9125/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0409 00:24:24.376322   60516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20501-9125/.minikube/certs/16314.pem --> /usr/share/ca-certificates/16314.pem (1338 bytes)
	I0409 00:24:24.404455   60516 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0409 00:24:24.421383   60516 ssh_runner.go:195] Run: openssl version
	I0409 00:24:24.428001   60516 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/163142.pem && ln -fs /usr/share/ca-certificates/163142.pem /etc/ssl/certs/163142.pem"
	I0409 00:24:24.438871   60516 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/163142.pem
	I0409 00:24:24.443276   60516 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  8 22:53 /usr/share/ca-certificates/163142.pem
	I0409 00:24:24.443339   60516 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/163142.pem
	I0409 00:24:24.449367   60516 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/163142.pem /etc/ssl/certs/3ec20f2e.0"
	I0409 00:24:24.460606   60516 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0409 00:24:24.471859   60516 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0409 00:24:24.478486   60516 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  8 22:46 /usr/share/ca-certificates/minikubeCA.pem
	I0409 00:24:24.478549   60516 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0409 00:24:24.486238   60516 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0409 00:24:24.496735   60516 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16314.pem && ln -fs /usr/share/ca-certificates/16314.pem /etc/ssl/certs/16314.pem"
	I0409 00:24:24.507296   60516 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16314.pem
	I0409 00:24:24.512194   60516 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  8 22:53 /usr/share/ca-certificates/16314.pem
	I0409 00:24:24.512258   60516 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16314.pem
	I0409 00:24:24.517938   60516 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16314.pem /etc/ssl/certs/51391683.0"
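The ls / openssl x509 -hash / ln -fs sequence above follows the standard OpenSSL CA directory layout: each CA certificate under /usr/share/ca-certificates gets a symlink in /etc/ssl/certs named <subject-hash>.0 so TLS verification can find it. A minimal Go sketch of one such step, shelling out to openssl for the hash exactly as the log does (meant to run inside the guest):

	package main

	import (
		"fmt"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// linkCACert symlinks certPath into /etc/ssl/certs under its subject hash,
	// mirroring the `openssl x509 -hash` + `ln -fs` pair in the log.
	func linkCACert(certPath string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join("/etc/ssl/certs", hash+".0")
		return exec.Command("sudo", "ln", "-fs", certPath, link).Run()
	}

	func main() {
		fmt.Println(linkCACert("/usr/share/ca-certificates/minikubeCA.pem"))
	}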
	I0409 00:24:24.533974   60516 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0409 00:24:24.547792   60516 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0409 00:24:24.547860   60516 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-636554 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-636554 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.37 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0409 00:24:24.547995   60516 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0409 00:24:24.548054   60516 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0409 00:24:24.596374   60516 cri.go:89] found id: ""
	I0409 00:24:24.596462   60516 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0409 00:24:24.607406   60516 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0409 00:24:24.625378   60516 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0409 00:24:24.642760   60516 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0409 00:24:24.642788   60516 kubeadm.go:157] found existing configuration files:
	
	I0409 00:24:24.642847   60516 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0409 00:24:24.660337   60516 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0409 00:24:24.660431   60516 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0409 00:24:24.672053   60516 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0409 00:24:24.681833   60516 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0409 00:24:24.681917   60516 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0409 00:24:24.691554   60516 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0409 00:24:24.700568   60516 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0409 00:24:24.700636   60516 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0409 00:24:24.710186   60516 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0409 00:24:24.719055   60516 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0409 00:24:24.719110   60516 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0409 00:24:24.728208   60516 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0409 00:24:24.865566   60516 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0409 00:24:24.865639   60516 kubeadm.go:310] [preflight] Running pre-flight checks
	I0409 00:24:25.013595   60516 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0409 00:24:25.013754   60516 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0409 00:24:25.013933   60516 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0409 00:24:25.204718   60516 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0409 00:24:25.350072   60516 out.go:235]   - Generating certificates and keys ...
	I0409 00:24:25.350256   60516 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0409 00:24:25.350358   60516 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0409 00:24:25.485670   60516 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0409 00:24:25.688104   60516 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0409 00:24:25.858611   60516 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0409 00:24:25.969663   60516 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0409 00:24:26.043334   60516 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0409 00:24:26.043578   60516 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-636554 localhost] and IPs [192.168.50.37 127.0.0.1 ::1]
	I0409 00:24:26.111770   60516 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0409 00:24:26.112065   60516 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-636554 localhost] and IPs [192.168.50.37 127.0.0.1 ::1]
	I0409 00:24:26.302670   60516 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0409 00:24:26.706219   60516 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0409 00:24:26.863727   60516 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0409 00:24:26.863845   60516 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0409 00:24:27.067326   60516 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0409 00:24:27.356706   60516 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0409 00:24:27.607580   60516 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0409 00:24:27.760786   60516 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0409 00:24:27.775280   60516 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0409 00:24:27.776633   60516 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0409 00:24:27.776719   60516 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0409 00:24:27.906201   60516 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0409 00:24:27.907966   60516 out.go:235]   - Booting up control plane ...
	I0409 00:24:27.908120   60516 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0409 00:24:27.916993   60516 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0409 00:24:27.918388   60516 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0409 00:24:27.919677   60516 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0409 00:24:27.924857   60516 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0409 00:25:07.919496   60516 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0409 00:25:07.920179   60516 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0409 00:25:07.920419   60516 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0409 00:25:12.920913   60516 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0409 00:25:12.921173   60516 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0409 00:25:22.920153   60516 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0409 00:25:22.920428   60516 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0409 00:25:42.919654   60516 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0409 00:25:42.919832   60516 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0409 00:26:22.921939   60516 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0409 00:26:22.922386   60516 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0409 00:26:22.922404   60516 kubeadm.go:310] 
	I0409 00:26:22.922486   60516 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0409 00:26:22.922572   60516 kubeadm.go:310] 		timed out waiting for the condition
	I0409 00:26:22.922579   60516 kubeadm.go:310] 
	I0409 00:26:22.922652   60516 kubeadm.go:310] 	This error is likely caused by:
	I0409 00:26:22.922720   60516 kubeadm.go:310] 		- The kubelet is not running
	I0409 00:26:22.922946   60516 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0409 00:26:22.922958   60516 kubeadm.go:310] 
	I0409 00:26:22.923180   60516 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0409 00:26:22.923255   60516 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0409 00:26:22.923323   60516 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0409 00:26:22.923332   60516 kubeadm.go:310] 
	I0409 00:26:22.923570   60516 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0409 00:26:22.923757   60516 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0409 00:26:22.923770   60516 kubeadm.go:310] 
	I0409 00:26:22.924022   60516 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0409 00:26:22.924218   60516 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0409 00:26:22.924377   60516 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0409 00:26:22.924537   60516 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0409 00:26:22.924544   60516 kubeadm.go:310] 
	I0409 00:26:22.926027   60516 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0409 00:26:22.926176   60516 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0409 00:26:22.926284   60516 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0409 00:26:22.926437   60516 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-636554 localhost] and IPs [192.168.50.37 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-636554 localhost] and IPs [192.168.50.37 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-636554 localhost] and IPs [192.168.50.37 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-636554 localhost] and IPs [192.168.50.37 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
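	The repeated [kubelet-check] lines are kubeadm polling the kubelet's local healthz endpoint until it responds or the 4m0s wait-control-plane budget runs out; in this run it never comes up, which is what turns the whole kubernetes-upgrade start into a failure. A toy Go version of that probe (endpoint and overall timeout are taken from the log; the poll interval is illustrative):

	package main

	import (
		"fmt"
		"net/http"
		"time"
	)

	// waitForKubelet polls http://localhost:10248/healthz, roughly what the
	// kubeadm [kubelet-check] phase does, until it gets a 200 or times out.
	func waitForKubelet(timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		client := &http.Client{Timeout: 5 * time.Second}
		for time.Now().Before(deadline) {
			resp, err := client.Get("http://localhost:10248/healthz")
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
			}
			time.Sleep(5 * time.Second)
		}
		return fmt.Errorf("timed out waiting for kubelet healthz after %s", timeout)
	}

	func main() {
		fmt.Println(waitForKubelet(4 * time.Minute))
	}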
	
	I0409 00:26:22.926480   60516 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0409 00:26:23.477637   60516 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0409 00:26:23.493782   60516 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0409 00:26:23.503818   60516 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0409 00:26:23.503843   60516 kubeadm.go:157] found existing configuration files:
	
	I0409 00:26:23.503934   60516 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0409 00:26:23.518325   60516 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0409 00:26:23.518419   60516 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0409 00:26:23.529734   60516 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0409 00:26:23.542114   60516 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0409 00:26:23.542180   60516 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0409 00:26:23.556019   60516 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0409 00:26:23.570140   60516 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0409 00:26:23.570221   60516 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0409 00:26:23.591945   60516 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0409 00:26:23.616832   60516 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0409 00:26:23.616893   60516 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0409 00:26:23.629632   60516 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0409 00:26:23.717702   60516 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0409 00:26:23.717795   60516 kubeadm.go:310] [preflight] Running pre-flight checks
	I0409 00:26:23.922637   60516 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0409 00:26:23.922821   60516 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0409 00:26:23.922950   60516 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0409 00:26:24.171556   60516 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0409 00:26:24.173509   60516 out.go:235]   - Generating certificates and keys ...
	I0409 00:26:24.173624   60516 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0409 00:26:24.173714   60516 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0409 00:26:24.173821   60516 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0409 00:26:24.173908   60516 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0409 00:26:24.174003   60516 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0409 00:26:24.174075   60516 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0409 00:26:24.174168   60516 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0409 00:26:24.174251   60516 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0409 00:26:24.174343   60516 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0409 00:26:24.174445   60516 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0409 00:26:24.174497   60516 kubeadm.go:310] [certs] Using the existing "sa" key
	I0409 00:26:24.174570   60516 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0409 00:26:24.290767   60516 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0409 00:26:24.591723   60516 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0409 00:26:24.722684   60516 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0409 00:26:24.962242   60516 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0409 00:26:24.982681   60516 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0409 00:26:24.982854   60516 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0409 00:26:24.982932   60516 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0409 00:26:25.164101   60516 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0409 00:26:25.169550   60516 out.go:235]   - Booting up control plane ...
	I0409 00:26:25.169732   60516 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0409 00:26:25.172856   60516 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0409 00:26:25.183090   60516 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0409 00:26:25.185371   60516 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0409 00:26:25.188706   60516 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0409 00:27:05.191164   60516 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0409 00:27:05.191555   60516 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0409 00:27:05.191817   60516 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0409 00:27:10.192502   60516 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0409 00:27:10.192777   60516 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0409 00:27:20.193687   60516 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0409 00:27:20.194037   60516 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0409 00:27:40.192953   60516 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0409 00:27:40.193238   60516 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0409 00:28:20.192581   60516 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0409 00:28:20.192860   60516 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0409 00:28:20.192872   60516 kubeadm.go:310] 
	I0409 00:28:20.192975   60516 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0409 00:28:20.193320   60516 kubeadm.go:310] 		timed out waiting for the condition
	I0409 00:28:20.193342   60516 kubeadm.go:310] 
	I0409 00:28:20.193391   60516 kubeadm.go:310] 	This error is likely caused by:
	I0409 00:28:20.193440   60516 kubeadm.go:310] 		- The kubelet is not running
	I0409 00:28:20.193585   60516 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0409 00:28:20.193600   60516 kubeadm.go:310] 
	I0409 00:28:20.193753   60516 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0409 00:28:20.193808   60516 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0409 00:28:20.193862   60516 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0409 00:28:20.193874   60516 kubeadm.go:310] 
	I0409 00:28:20.194024   60516 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0409 00:28:20.194173   60516 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0409 00:28:20.194194   60516 kubeadm.go:310] 
	I0409 00:28:20.194326   60516 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0409 00:28:20.194425   60516 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0409 00:28:20.194510   60516 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0409 00:28:20.194591   60516 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0409 00:28:20.194599   60516 kubeadm.go:310] 
	I0409 00:28:20.197127   60516 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0409 00:28:20.197233   60516 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0409 00:28:20.197309   60516 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0409 00:28:20.197374   60516 kubeadm.go:394] duration metric: took 3m55.649520132s to StartCluster
	I0409 00:28:20.197427   60516 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0409 00:28:20.197501   60516 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0409 00:28:20.249118   60516 cri.go:89] found id: ""
	I0409 00:28:20.249147   60516 logs.go:282] 0 containers: []
	W0409 00:28:20.249158   60516 logs.go:284] No container was found matching "kube-apiserver"
	I0409 00:28:20.249180   60516 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0409 00:28:20.249243   60516 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0409 00:28:20.298542   60516 cri.go:89] found id: ""
	I0409 00:28:20.298570   60516 logs.go:282] 0 containers: []
	W0409 00:28:20.298581   60516 logs.go:284] No container was found matching "etcd"
	I0409 00:28:20.298587   60516 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0409 00:28:20.298646   60516 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0409 00:28:20.333925   60516 cri.go:89] found id: ""
	I0409 00:28:20.333953   60516 logs.go:282] 0 containers: []
	W0409 00:28:20.333960   60516 logs.go:284] No container was found matching "coredns"
	I0409 00:28:20.333966   60516 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0409 00:28:20.334014   60516 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0409 00:28:20.372920   60516 cri.go:89] found id: ""
	I0409 00:28:20.372954   60516 logs.go:282] 0 containers: []
	W0409 00:28:20.372966   60516 logs.go:284] No container was found matching "kube-scheduler"
	I0409 00:28:20.372974   60516 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0409 00:28:20.373041   60516 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0409 00:28:20.416673   60516 cri.go:89] found id: ""
	I0409 00:28:20.416701   60516 logs.go:282] 0 containers: []
	W0409 00:28:20.416712   60516 logs.go:284] No container was found matching "kube-proxy"
	I0409 00:28:20.416719   60516 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0409 00:28:20.416786   60516 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0409 00:28:20.461646   60516 cri.go:89] found id: ""
	I0409 00:28:20.461667   60516 logs.go:282] 0 containers: []
	W0409 00:28:20.461673   60516 logs.go:284] No container was found matching "kube-controller-manager"
	I0409 00:28:20.461680   60516 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0409 00:28:20.461728   60516 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0409 00:28:20.503586   60516 cri.go:89] found id: ""
	I0409 00:28:20.503607   60516 logs.go:282] 0 containers: []
	W0409 00:28:20.503615   60516 logs.go:284] No container was found matching "kindnet"
	I0409 00:28:20.503625   60516 logs.go:123] Gathering logs for dmesg ...
	I0409 00:28:20.503639   60516 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0409 00:28:20.517311   60516 logs.go:123] Gathering logs for describe nodes ...
	I0409 00:28:20.517345   60516 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0409 00:28:20.656840   60516 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0409 00:28:20.656865   60516 logs.go:123] Gathering logs for CRI-O ...
	I0409 00:28:20.656878   60516 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0409 00:28:20.802037   60516 logs.go:123] Gathering logs for container status ...
	I0409 00:28:20.802071   60516 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0409 00:28:20.845357   60516 logs.go:123] Gathering logs for kubelet ...
	I0409 00:28:20.845394   60516 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0409 00:28:20.903725   60516 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0409 00:28:20.903788   60516 out.go:270] * 
	* 
	W0409 00:28:20.903858   60516 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0409 00:28:20.903898   60516 out.go:270] * 
	* 
	W0409 00:28:20.904892   60516 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0409 00:28:20.907997   60516 out.go:201] 
	W0409 00:28:20.909049   60516 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0409 00:28:20.909096   60516 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0409 00:28:20.909114   60516 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0409 00:28:20.910249   60516 out.go:201] 

                                                
                                                
** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-636554 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
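Note: in this run the start command exits 109 after printing K8S_KUBELET_NOT_RUNNING: during 'kubeadm init' the kubelet never answered 'curl -sSL http://localhost:10248/healthz', so no control-plane containers were created and the wait-control-plane phase timed out. A minimal manual triage sketch, built only from commands the log above already suggests (the profile name is taken from this run; the cgroup-driver override is the log's own suggestion, not a verified fix):

	# inspect kubelet and CRI-O state inside the minikube VM
	out/minikube-linux-amd64 -p kubernetes-upgrade-636554 ssh "sudo systemctl status kubelet --no-pager"
	out/minikube-linux-amd64 -p kubernetes-upgrade-636554 ssh "sudo journalctl -xeu kubelet | tail -n 100"
	out/minikube-linux-amd64 -p kubernetes-upgrade-636554 ssh "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"
	# if the journal points at a cgroup-driver mismatch, retry with the suggested override
	out/minikube-linux-amd64 start -p kubernetes-upgrade-636554 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2 --container-runtime=crio --extra-config=kubelet.cgroup-driver=systemd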
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-636554
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-636554: (1.433326176s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-636554 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-636554 status --format={{.Host}}: exit status 7 (83.128865ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-636554 --memory=2200 --kubernetes-version=v1.32.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-636554 --memory=2200 --kubernetes-version=v1.32.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (43.551025902s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-636554 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-636554 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-636554 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (84.154284ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-636554] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20501
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20501-9125/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20501-9125/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.32.2 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-636554
	    minikube start -p kubernetes-upgrade-636554 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-6365542 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.32.2, by running:
	    
	    minikube start -p kubernetes-upgrade-636554 --kubernetes-version=v1.32.2
	    

                                                
                                                
** /stderr **
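Note: the downgrade attempt above is refused by design; the command fails fast with exit status 106 (K8S_DOWNGRADE_UNSUPPORTED), leaving the existing v1.32.2 cluster in place. A short sketch of the follow-up the test then performs, using only commands already shown in this run (profile name taken from the run above):

	# confirm the existing control plane still reports v1.32.2 after the refused downgrade
	kubectl --context kubernetes-upgrade-636554 version --output=json
	# restart the profile at the version it already runs
	out/minikube-linux-amd64 start -p kubernetes-upgrade-636554 --memory=2200 --kubernetes-version=v1.32.2 --alsologtostderr -v=1 --driver=kvm2 --container-runtime=crio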
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-636554 --memory=2200 --kubernetes-version=v1.32.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-636554 --memory=2200 --kubernetes-version=v1.32.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (38.294470932s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2025-04-09 00:29:44.493935403 +0000 UTC m=+6279.398974531
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-636554 -n kubernetes-upgrade-636554
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-636554 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-636554 logs -n 25: (1.767637872s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p custom-flannel-459514                             | custom-flannel-459514     | jenkins | v1.35.0 | 09 Apr 25 00:29 UTC | 09 Apr 25 00:29 UTC |
	|         | sudo systemctl cat kubelet                           |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p custom-flannel-459514 sudo                        | custom-flannel-459514     | jenkins | v1.35.0 | 09 Apr 25 00:29 UTC | 09 Apr 25 00:29 UTC |
	|         | journalctl -xeu kubelet --all                        |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p custom-flannel-459514                             | custom-flannel-459514     | jenkins | v1.35.0 | 09 Apr 25 00:29 UTC | 09 Apr 25 00:29 UTC |
	|         | sudo cat                                             |                           |         |         |                     |                     |
	|         | /etc/kubernetes/kubelet.conf                         |                           |         |         |                     |                     |
	| ssh     | -p custom-flannel-459514                             | custom-flannel-459514     | jenkins | v1.35.0 | 09 Apr 25 00:29 UTC | 09 Apr 25 00:29 UTC |
	|         | sudo cat                                             |                           |         |         |                     |                     |
	|         | /var/lib/kubelet/config.yaml                         |                           |         |         |                     |                     |
	| ssh     | -p custom-flannel-459514 sudo                        | custom-flannel-459514     | jenkins | v1.35.0 | 09 Apr 25 00:29 UTC |                     |
	|         | systemctl status docker --all                        |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p custom-flannel-459514                             | custom-flannel-459514     | jenkins | v1.35.0 | 09 Apr 25 00:29 UTC | 09 Apr 25 00:29 UTC |
	|         | sudo systemctl cat docker                            |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p custom-flannel-459514 sudo                        | custom-flannel-459514     | jenkins | v1.35.0 | 09 Apr 25 00:29 UTC | 09 Apr 25 00:29 UTC |
	|         | cat /etc/docker/daemon.json                          |                           |         |         |                     |                     |
	| ssh     | -p custom-flannel-459514 sudo                        | custom-flannel-459514     | jenkins | v1.35.0 | 09 Apr 25 00:29 UTC |                     |
	|         | docker system info                                   |                           |         |         |                     |                     |
	| ssh     | -p custom-flannel-459514 sudo                        | custom-flannel-459514     | jenkins | v1.35.0 | 09 Apr 25 00:29 UTC |                     |
	|         | systemctl status cri-docker                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p custom-flannel-459514                             | custom-flannel-459514     | jenkins | v1.35.0 | 09 Apr 25 00:29 UTC | 09 Apr 25 00:29 UTC |
	|         | sudo systemctl cat cri-docker                        |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p custom-flannel-459514 sudo cat                    | custom-flannel-459514     | jenkins | v1.35.0 | 09 Apr 25 00:29 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |         |                     |                     |
	| ssh     | -p custom-flannel-459514 sudo cat                    | custom-flannel-459514     | jenkins | v1.35.0 | 09 Apr 25 00:29 UTC | 09 Apr 25 00:29 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |         |                     |                     |
	| ssh     | -p custom-flannel-459514 sudo                        | custom-flannel-459514     | jenkins | v1.35.0 | 09 Apr 25 00:29 UTC | 09 Apr 25 00:29 UTC |
	|         | cri-dockerd --version                                |                           |         |         |                     |                     |
	| ssh     | -p custom-flannel-459514 sudo                        | custom-flannel-459514     | jenkins | v1.35.0 | 09 Apr 25 00:29 UTC |                     |
	|         | systemctl status containerd                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p custom-flannel-459514                             | custom-flannel-459514     | jenkins | v1.35.0 | 09 Apr 25 00:29 UTC | 09 Apr 25 00:29 UTC |
	|         | sudo systemctl cat containerd                        |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p custom-flannel-459514 sudo cat                    | custom-flannel-459514     | jenkins | v1.35.0 | 09 Apr 25 00:29 UTC | 09 Apr 25 00:29 UTC |
	|         | /lib/systemd/system/containerd.service               |                           |         |         |                     |                     |
	| ssh     | -p custom-flannel-459514                             | custom-flannel-459514     | jenkins | v1.35.0 | 09 Apr 25 00:29 UTC | 09 Apr 25 00:29 UTC |
	|         | sudo cat                                             |                           |         |         |                     |                     |
	|         | /etc/containerd/config.toml                          |                           |         |         |                     |                     |
	| ssh     | -p custom-flannel-459514 sudo                        | custom-flannel-459514     | jenkins | v1.35.0 | 09 Apr 25 00:29 UTC | 09 Apr 25 00:29 UTC |
	|         | containerd config dump                               |                           |         |         |                     |                     |
	| ssh     | -p custom-flannel-459514 sudo                        | custom-flannel-459514     | jenkins | v1.35.0 | 09 Apr 25 00:29 UTC | 09 Apr 25 00:29 UTC |
	|         | systemctl status crio --all                          |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p custom-flannel-459514 sudo                        | custom-flannel-459514     | jenkins | v1.35.0 | 09 Apr 25 00:29 UTC | 09 Apr 25 00:29 UTC |
	|         | systemctl cat crio --no-pager                        |                           |         |         |                     |                     |
	| ssh     | -p custom-flannel-459514 sudo                        | custom-flannel-459514     | jenkins | v1.35.0 | 09 Apr 25 00:29 UTC | 09 Apr 25 00:29 UTC |
	|         | find /etc/crio -type f -exec                         |                           |         |         |                     |                     |
	|         | sh -c 'echo {}; cat {}' \;                           |                           |         |         |                     |                     |
	| ssh     | -p custom-flannel-459514 sudo                        | custom-flannel-459514     | jenkins | v1.35.0 | 09 Apr 25 00:29 UTC | 09 Apr 25 00:29 UTC |
	|         | crio config                                          |                           |         |         |                     |                     |
	| delete  | -p custom-flannel-459514                             | custom-flannel-459514     | jenkins | v1.35.0 | 09 Apr 25 00:29 UTC | 09 Apr 25 00:29 UTC |
	| start   | -p bridge-459514 --memory=3072                       | bridge-459514             | jenkins | v1.35.0 | 09 Apr 25 00:29 UTC |                     |
	|         | --alsologtostderr --wait=true                        |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                                   |                           |         |         |                     |                     |
	|         | --cni=bridge --driver=kvm2                           |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-459514                         | enable-default-cni-459514 | jenkins | v1.35.0 | 09 Apr 25 00:29 UTC | 09 Apr 25 00:29 UTC |
	|         | pgrep -a kubelet                                     |                           |         |         |                     |                     |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/09 00:29:13
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0409 00:29:13.111996   70164 out.go:345] Setting OutFile to fd 1 ...
	I0409 00:29:13.112276   70164 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0409 00:29:13.112287   70164 out.go:358] Setting ErrFile to fd 2...
	I0409 00:29:13.112291   70164 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0409 00:29:13.112480   70164 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20501-9125/.minikube/bin
	I0409 00:29:13.113053   70164 out.go:352] Setting JSON to false
	I0409 00:29:13.114158   70164 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":7898,"bootTime":1744150655,"procs":311,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0409 00:29:13.114215   70164 start.go:139] virtualization: kvm guest
	I0409 00:29:13.116210   70164 out.go:177] * [bridge-459514] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0409 00:29:13.117412   70164 notify.go:220] Checking for updates...
	I0409 00:29:13.117470   70164 out.go:177]   - MINIKUBE_LOCATION=20501
	I0409 00:29:13.118724   70164 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0409 00:29:13.120101   70164 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20501-9125/kubeconfig
	I0409 00:29:13.121693   70164 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20501-9125/.minikube
	I0409 00:29:13.123674   70164 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0409 00:29:13.124879   70164 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0409 00:29:13.126319   70164 config.go:182] Loaded profile config "enable-default-cni-459514": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0409 00:29:13.126399   70164 config.go:182] Loaded profile config "flannel-459514": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0409 00:29:13.126508   70164 config.go:182] Loaded profile config "kubernetes-upgrade-636554": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0409 00:29:13.126610   70164 driver.go:404] Setting default libvirt URI to qemu:///system
	I0409 00:29:13.164827   70164 out.go:177] * Using the kvm2 driver based on user configuration
	I0409 00:29:13.166087   70164 start.go:297] selected driver: kvm2
	I0409 00:29:13.166102   70164 start.go:901] validating driver "kvm2" against <nil>
	I0409 00:29:13.166120   70164 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0409 00:29:13.167096   70164 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0409 00:29:13.167184   70164 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20501-9125/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0409 00:29:13.184634   70164 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0409 00:29:13.184677   70164 start_flags.go:311] no existing cluster config was found, will generate one from the flags 
	I0409 00:29:13.184907   70164 start_flags.go:975] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0409 00:29:13.184936   70164 cni.go:84] Creating CNI manager for "bridge"
	I0409 00:29:13.184941   70164 start_flags.go:320] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0409 00:29:13.185001   70164 start.go:340] cluster config:
	{Name:bridge-459514 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:bridge-459514 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0409 00:29:13.185095   70164 iso.go:125] acquiring lock: {Name:mk618477bad490b102618c53c9c8c6b34f33ce81 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0409 00:29:13.186824   70164 out.go:177] * Starting "bridge-459514" primary control-plane node in "bridge-459514" cluster
	I0409 00:29:15.108719   69188 start.go:364] duration metric: took 8.735432196s to acquireMachinesLock for "kubernetes-upgrade-636554"
	I0409 00:29:15.108787   69188 start.go:96] Skipping create...Using existing machine configuration
	I0409 00:29:15.108798   69188 fix.go:54] fixHost starting: 
	I0409 00:29:15.109230   69188 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0409 00:29:15.109281   69188 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0409 00:29:15.126294   69188 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33329
	I0409 00:29:15.126665   69188 main.go:141] libmachine: () Calling .GetVersion
	I0409 00:29:15.127080   69188 main.go:141] libmachine: Using API Version  1
	I0409 00:29:15.127103   69188 main.go:141] libmachine: () Calling .SetConfigRaw
	I0409 00:29:15.127420   69188 main.go:141] libmachine: () Calling .GetMachineName
	I0409 00:29:15.127670   69188 main.go:141] libmachine: (kubernetes-upgrade-636554) Calling .DriverName
	I0409 00:29:15.127830   69188 main.go:141] libmachine: (kubernetes-upgrade-636554) Calling .GetState
	I0409 00:29:15.129272   69188 fix.go:112] recreateIfNeeded on kubernetes-upgrade-636554: state=Running err=<nil>
	W0409 00:29:15.129291   69188 fix.go:138] unexpected machine state, will restart: <nil>
	I0409 00:29:15.131321   69188 out.go:177] * Updating the running kvm2 "kubernetes-upgrade-636554" VM ...
	I0409 00:29:12.420766   66178 pod_ready.go:103] pod "coredns-668d6bf9bc-xgt7n" in "kube-system" namespace has status "Ready":"False"
	I0409 00:29:14.919960   66178 pod_ready.go:103] pod "coredns-668d6bf9bc-xgt7n" in "kube-system" namespace has status "Ready":"False"
	I0409 00:29:13.649828   68181 main.go:141] libmachine: (flannel-459514) DBG | domain flannel-459514 has defined MAC address 52:54:00:b0:26:17 in network mk-flannel-459514
	I0409 00:29:13.650343   68181 main.go:141] libmachine: (flannel-459514) found domain IP: 192.168.72.137
	I0409 00:29:13.650384   68181 main.go:141] libmachine: (flannel-459514) DBG | domain flannel-459514 has current primary IP address 192.168.72.137 and MAC address 52:54:00:b0:26:17 in network mk-flannel-459514
	I0409 00:29:13.650398   68181 main.go:141] libmachine: (flannel-459514) reserving static IP address...
	I0409 00:29:13.650756   68181 main.go:141] libmachine: (flannel-459514) DBG | unable to find host DHCP lease matching {name: "flannel-459514", mac: "52:54:00:b0:26:17", ip: "192.168.72.137"} in network mk-flannel-459514
	I0409 00:29:13.726963   68181 main.go:141] libmachine: (flannel-459514) DBG | Getting to WaitForSSH function...
	I0409 00:29:13.726992   68181 main.go:141] libmachine: (flannel-459514) reserved static IP address 192.168.72.137 for domain flannel-459514
	I0409 00:29:13.727050   68181 main.go:141] libmachine: (flannel-459514) waiting for SSH...
	I0409 00:29:13.729761   68181 main.go:141] libmachine: (flannel-459514) DBG | domain flannel-459514 has defined MAC address 52:54:00:b0:26:17 in network mk-flannel-459514
	I0409 00:29:13.730247   68181 main.go:141] libmachine: (flannel-459514) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:26:17", ip: ""} in network mk-flannel-459514: {Iface:virbr4 ExpiryTime:2025-04-09 01:29:03 +0000 UTC Type:0 Mac:52:54:00:b0:26:17 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:minikube Clientid:01:52:54:00:b0:26:17}
	I0409 00:29:13.730272   68181 main.go:141] libmachine: (flannel-459514) DBG | domain flannel-459514 has defined IP address 192.168.72.137 and MAC address 52:54:00:b0:26:17 in network mk-flannel-459514
	I0409 00:29:13.730421   68181 main.go:141] libmachine: (flannel-459514) DBG | Using SSH client type: external
	I0409 00:29:13.730446   68181 main.go:141] libmachine: (flannel-459514) DBG | Using SSH private key: /home/jenkins/minikube-integration/20501-9125/.minikube/machines/flannel-459514/id_rsa (-rw-------)
	I0409 00:29:13.730487   68181 main.go:141] libmachine: (flannel-459514) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.137 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20501-9125/.minikube/machines/flannel-459514/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0409 00:29:13.730501   68181 main.go:141] libmachine: (flannel-459514) DBG | About to run SSH command:
	I0409 00:29:13.730512   68181 main.go:141] libmachine: (flannel-459514) DBG | exit 0
	I0409 00:29:13.859721   68181 main.go:141] libmachine: (flannel-459514) DBG | SSH cmd err, output: <nil>: 
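
For orientation, the WaitForSSH step above just re-runs "exit 0" through an external OpenSSH client with host-key checking disabled until it succeeds. A minimal standalone sketch of that probe, assuming an ssh binary on PATH and using hypothetical address/key values taken from the log (this is not minikube's actual sshutil code):

// ssh_probe.go: poll a freshly created VM until "ssh ... exit 0" succeeds.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Hypothetical values; the log uses docker@192.168.72.137 and the
	// machine's id_rsa under .minikube/machines/<name>/.
	addr := "docker@192.168.72.137"
	key := "/home/jenkins/.minikube/machines/flannel-459514/id_rsa"

	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", key,
		"-p", "22",
		addr, "exit 0",
	}

	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		if err := exec.Command("ssh", args...).Run(); err == nil {
			fmt.Println("SSH is available")
			return
		}
		time.Sleep(3 * time.Second)
	}
	fmt.Println("timed out waiting for SSH")
}
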
	I0409 00:29:13.860039   68181 main.go:141] libmachine: (flannel-459514) KVM machine creation complete
	I0409 00:29:13.860318   68181 main.go:141] libmachine: (flannel-459514) Calling .GetConfigRaw
	I0409 00:29:13.860808   68181 main.go:141] libmachine: (flannel-459514) Calling .DriverName
	I0409 00:29:13.861043   68181 main.go:141] libmachine: (flannel-459514) Calling .DriverName
	I0409 00:29:13.861205   68181 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0409 00:29:13.861218   68181 main.go:141] libmachine: (flannel-459514) Calling .GetState
	I0409 00:29:13.862626   68181 main.go:141] libmachine: Detecting operating system of created instance...
	I0409 00:29:13.862640   68181 main.go:141] libmachine: Waiting for SSH to be available...
	I0409 00:29:13.862645   68181 main.go:141] libmachine: Getting to WaitForSSH function...
	I0409 00:29:13.862668   68181 main.go:141] libmachine: (flannel-459514) Calling .GetSSHHostname
	I0409 00:29:13.865125   68181 main.go:141] libmachine: (flannel-459514) DBG | domain flannel-459514 has defined MAC address 52:54:00:b0:26:17 in network mk-flannel-459514
	I0409 00:29:13.865483   68181 main.go:141] libmachine: (flannel-459514) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:26:17", ip: ""} in network mk-flannel-459514: {Iface:virbr4 ExpiryTime:2025-04-09 01:29:03 +0000 UTC Type:0 Mac:52:54:00:b0:26:17 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:flannel-459514 Clientid:01:52:54:00:b0:26:17}
	I0409 00:29:13.865515   68181 main.go:141] libmachine: (flannel-459514) DBG | domain flannel-459514 has defined IP address 192.168.72.137 and MAC address 52:54:00:b0:26:17 in network mk-flannel-459514
	I0409 00:29:13.865683   68181 main.go:141] libmachine: (flannel-459514) Calling .GetSSHPort
	I0409 00:29:13.865867   68181 main.go:141] libmachine: (flannel-459514) Calling .GetSSHKeyPath
	I0409 00:29:13.866019   68181 main.go:141] libmachine: (flannel-459514) Calling .GetSSHKeyPath
	I0409 00:29:13.866130   68181 main.go:141] libmachine: (flannel-459514) Calling .GetSSHUsername
	I0409 00:29:13.866241   68181 main.go:141] libmachine: Using SSH client type: native
	I0409 00:29:13.866434   68181 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.72.137 22 <nil> <nil>}
	I0409 00:29:13.866442   68181 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0409 00:29:13.970968   68181 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0409 00:29:13.971005   68181 main.go:141] libmachine: Detecting the provisioner...
	I0409 00:29:13.971014   68181 main.go:141] libmachine: (flannel-459514) Calling .GetSSHHostname
	I0409 00:29:13.973855   68181 main.go:141] libmachine: (flannel-459514) DBG | domain flannel-459514 has defined MAC address 52:54:00:b0:26:17 in network mk-flannel-459514
	I0409 00:29:13.974217   68181 main.go:141] libmachine: (flannel-459514) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:26:17", ip: ""} in network mk-flannel-459514: {Iface:virbr4 ExpiryTime:2025-04-09 01:29:03 +0000 UTC Type:0 Mac:52:54:00:b0:26:17 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:flannel-459514 Clientid:01:52:54:00:b0:26:17}
	I0409 00:29:13.974239   68181 main.go:141] libmachine: (flannel-459514) DBG | domain flannel-459514 has defined IP address 192.168.72.137 and MAC address 52:54:00:b0:26:17 in network mk-flannel-459514
	I0409 00:29:13.974461   68181 main.go:141] libmachine: (flannel-459514) Calling .GetSSHPort
	I0409 00:29:13.974633   68181 main.go:141] libmachine: (flannel-459514) Calling .GetSSHKeyPath
	I0409 00:29:13.974780   68181 main.go:141] libmachine: (flannel-459514) Calling .GetSSHKeyPath
	I0409 00:29:13.974926   68181 main.go:141] libmachine: (flannel-459514) Calling .GetSSHUsername
	I0409 00:29:13.975103   68181 main.go:141] libmachine: Using SSH client type: native
	I0409 00:29:13.975309   68181 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.72.137 22 <nil> <nil>}
	I0409 00:29:13.975322   68181 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0409 00:29:14.080268   68181 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0409 00:29:14.080326   68181 main.go:141] libmachine: found compatible host: buildroot
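
The provisioner detection above reads /etc/os-release over SSH and matches the ID field ("buildroot" here). A simplified local sketch of that parse and decision, not the libmachine implementation:

// osrelease.go: decide which provisioner applies from /etc/os-release,
// mirroring the "cat /etc/os-release" -> "found compatible host: buildroot" step.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	f, err := os.Open("/etc/os-release")
	if err != nil {
		fmt.Println("cannot read os-release:", err)
		return
	}
	defer f.Close()

	info := map[string]string{}
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" || strings.HasPrefix(line, "#") {
			continue
		}
		if k, v, ok := strings.Cut(line, "="); ok {
			info[k] = strings.Trim(v, `"`) // naive unquoting
		}
	}

	switch strings.ToLower(info["ID"]) {
	case "buildroot":
		fmt.Println("found compatible host: buildroot")
	case "ubuntu", "debian":
		fmt.Println("found compatible host:", info["ID"])
	default:
		fmt.Println("unknown host:", info["PRETTY_NAME"])
	}
}
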
	I0409 00:29:14.080336   68181 main.go:141] libmachine: Provisioning with buildroot...
	I0409 00:29:14.080345   68181 main.go:141] libmachine: (flannel-459514) Calling .GetMachineName
	I0409 00:29:14.080578   68181 buildroot.go:166] provisioning hostname "flannel-459514"
	I0409 00:29:14.080593   68181 main.go:141] libmachine: (flannel-459514) Calling .GetMachineName
	I0409 00:29:14.080757   68181 main.go:141] libmachine: (flannel-459514) Calling .GetSSHHostname
	I0409 00:29:14.083489   68181 main.go:141] libmachine: (flannel-459514) DBG | domain flannel-459514 has defined MAC address 52:54:00:b0:26:17 in network mk-flannel-459514
	I0409 00:29:14.083844   68181 main.go:141] libmachine: (flannel-459514) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:26:17", ip: ""} in network mk-flannel-459514: {Iface:virbr4 ExpiryTime:2025-04-09 01:29:03 +0000 UTC Type:0 Mac:52:54:00:b0:26:17 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:flannel-459514 Clientid:01:52:54:00:b0:26:17}
	I0409 00:29:14.083888   68181 main.go:141] libmachine: (flannel-459514) DBG | domain flannel-459514 has defined IP address 192.168.72.137 and MAC address 52:54:00:b0:26:17 in network mk-flannel-459514
	I0409 00:29:14.084028   68181 main.go:141] libmachine: (flannel-459514) Calling .GetSSHPort
	I0409 00:29:14.084221   68181 main.go:141] libmachine: (flannel-459514) Calling .GetSSHKeyPath
	I0409 00:29:14.084366   68181 main.go:141] libmachine: (flannel-459514) Calling .GetSSHKeyPath
	I0409 00:29:14.084482   68181 main.go:141] libmachine: (flannel-459514) Calling .GetSSHUsername
	I0409 00:29:14.084640   68181 main.go:141] libmachine: Using SSH client type: native
	I0409 00:29:14.084880   68181 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.72.137 22 <nil> <nil>}
	I0409 00:29:14.084893   68181 main.go:141] libmachine: About to run SSH command:
	sudo hostname flannel-459514 && echo "flannel-459514" | sudo tee /etc/hostname
	I0409 00:29:14.201275   68181 main.go:141] libmachine: SSH cmd err, output: <nil>: flannel-459514
	
	I0409 00:29:14.201308   68181 main.go:141] libmachine: (flannel-459514) Calling .GetSSHHostname
	I0409 00:29:14.204329   68181 main.go:141] libmachine: (flannel-459514) DBG | domain flannel-459514 has defined MAC address 52:54:00:b0:26:17 in network mk-flannel-459514
	I0409 00:29:14.204755   68181 main.go:141] libmachine: (flannel-459514) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:26:17", ip: ""} in network mk-flannel-459514: {Iface:virbr4 ExpiryTime:2025-04-09 01:29:03 +0000 UTC Type:0 Mac:52:54:00:b0:26:17 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:flannel-459514 Clientid:01:52:54:00:b0:26:17}
	I0409 00:29:14.204786   68181 main.go:141] libmachine: (flannel-459514) DBG | domain flannel-459514 has defined IP address 192.168.72.137 and MAC address 52:54:00:b0:26:17 in network mk-flannel-459514
	I0409 00:29:14.204981   68181 main.go:141] libmachine: (flannel-459514) Calling .GetSSHPort
	I0409 00:29:14.205176   68181 main.go:141] libmachine: (flannel-459514) Calling .GetSSHKeyPath
	I0409 00:29:14.205353   68181 main.go:141] libmachine: (flannel-459514) Calling .GetSSHKeyPath
	I0409 00:29:14.205460   68181 main.go:141] libmachine: (flannel-459514) Calling .GetSSHUsername
	I0409 00:29:14.205597   68181 main.go:141] libmachine: Using SSH client type: native
	I0409 00:29:14.205834   68181 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.72.137 22 <nil> <nil>}
	I0409 00:29:14.205851   68181 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sflannel-459514' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 flannel-459514/g' /etc/hosts;
				else 
					echo '127.0.1.1 flannel-459514' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0409 00:29:14.324882   68181 main.go:141] libmachine: SSH cmd err, output: <nil>: 
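
The shell snippet just executed makes the /etc/hosts entry idempotent: if no line already ends in the hostname, it rewrites an existing 127.0.1.1 line or appends a new one. The same logic as a small Go sketch operating on a working copy (hypothetical path; not run against the real /etc/hosts):

// hosts_update.go: idempotent 127.0.1.1 hostname entry, as in the shell above.
package main

import (
	"fmt"
	"os"
	"regexp"
	"strings"
)

func ensureHostname(contents, hostname string) string {
	// Already present? (any line whose last field is the hostname)
	present := regexp.MustCompile(`(?m)^.*\s` + regexp.QuoteMeta(hostname) + `$`)
	if present.MatchString(contents) {
		return contents
	}
	// Rewrite an existing 127.0.1.1 line if there is one...
	loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	if loopback.MatchString(contents) {
		return loopback.ReplaceAllString(contents, "127.0.1.1 "+hostname)
	}
	// ...otherwise append a new one.
	return strings.TrimRight(contents, "\n") + "\n127.0.1.1 " + hostname + "\n"
}

func main() {
	path := "hosts.copy" // hypothetical working copy
	data, err := os.ReadFile(path)
	if err != nil {
		fmt.Println(err)
		return
	}
	updated := ensureHostname(string(data), "flannel-459514")
	if err := os.WriteFile(path, []byte(updated), 0644); err != nil {
		fmt.Println(err)
	}
}
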
	I0409 00:29:14.324908   68181 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20501-9125/.minikube CaCertPath:/home/jenkins/minikube-integration/20501-9125/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20501-9125/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20501-9125/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20501-9125/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20501-9125/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20501-9125/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20501-9125/.minikube}
	I0409 00:29:14.324929   68181 buildroot.go:174] setting up certificates
	I0409 00:29:14.324941   68181 provision.go:84] configureAuth start
	I0409 00:29:14.324953   68181 main.go:141] libmachine: (flannel-459514) Calling .GetMachineName
	I0409 00:29:14.325255   68181 main.go:141] libmachine: (flannel-459514) Calling .GetIP
	I0409 00:29:14.328093   68181 main.go:141] libmachine: (flannel-459514) DBG | domain flannel-459514 has defined MAC address 52:54:00:b0:26:17 in network mk-flannel-459514
	I0409 00:29:14.328498   68181 main.go:141] libmachine: (flannel-459514) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:26:17", ip: ""} in network mk-flannel-459514: {Iface:virbr4 ExpiryTime:2025-04-09 01:29:03 +0000 UTC Type:0 Mac:52:54:00:b0:26:17 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:flannel-459514 Clientid:01:52:54:00:b0:26:17}
	I0409 00:29:14.328519   68181 main.go:141] libmachine: (flannel-459514) DBG | domain flannel-459514 has defined IP address 192.168.72.137 and MAC address 52:54:00:b0:26:17 in network mk-flannel-459514
	I0409 00:29:14.328695   68181 main.go:141] libmachine: (flannel-459514) Calling .GetSSHHostname
	I0409 00:29:14.330965   68181 main.go:141] libmachine: (flannel-459514) DBG | domain flannel-459514 has defined MAC address 52:54:00:b0:26:17 in network mk-flannel-459514
	I0409 00:29:14.331291   68181 main.go:141] libmachine: (flannel-459514) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:26:17", ip: ""} in network mk-flannel-459514: {Iface:virbr4 ExpiryTime:2025-04-09 01:29:03 +0000 UTC Type:0 Mac:52:54:00:b0:26:17 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:flannel-459514 Clientid:01:52:54:00:b0:26:17}
	I0409 00:29:14.331313   68181 main.go:141] libmachine: (flannel-459514) DBG | domain flannel-459514 has defined IP address 192.168.72.137 and MAC address 52:54:00:b0:26:17 in network mk-flannel-459514
	I0409 00:29:14.331442   68181 provision.go:143] copyHostCerts
	I0409 00:29:14.331497   68181 exec_runner.go:144] found /home/jenkins/minikube-integration/20501-9125/.minikube/ca.pem, removing ...
	I0409 00:29:14.331530   68181 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20501-9125/.minikube/ca.pem
	I0409 00:29:14.331587   68181 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20501-9125/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20501-9125/.minikube/ca.pem (1082 bytes)
	I0409 00:29:14.331682   68181 exec_runner.go:144] found /home/jenkins/minikube-integration/20501-9125/.minikube/cert.pem, removing ...
	I0409 00:29:14.331691   68181 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20501-9125/.minikube/cert.pem
	I0409 00:29:14.331710   68181 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20501-9125/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20501-9125/.minikube/cert.pem (1123 bytes)
	I0409 00:29:14.331774   68181 exec_runner.go:144] found /home/jenkins/minikube-integration/20501-9125/.minikube/key.pem, removing ...
	I0409 00:29:14.331781   68181 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20501-9125/.minikube/key.pem
	I0409 00:29:14.331797   68181 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20501-9125/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20501-9125/.minikube/key.pem (1675 bytes)
	I0409 00:29:14.331923   68181 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20501-9125/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20501-9125/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20501-9125/.minikube/certs/ca-key.pem org=jenkins.flannel-459514 san=[127.0.0.1 192.168.72.137 flannel-459514 localhost minikube]
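
The "generating server cert" line lists the SANs the certificate must cover: the loopback and node IPs plus the hostname, "localhost" and "minikube". A compact sketch of building such a certificate with those SANs; it is self-signed for brevity, whereas minikube signs the server cert with its CA key, so treat this as an illustration rather than the provisioner's code:

// servercert.go: a server certificate whose SANs match the log line above.
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.flannel-459514"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the cluster config
		KeyUsage:     x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs from the log: IPs plus DNS names.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.137")},
		DNSNames:    []string{"flannel-459514", "localhost", "minikube"},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
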
	I0409 00:29:14.484063   68181 provision.go:177] copyRemoteCerts
	I0409 00:29:14.484122   68181 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0409 00:29:14.484144   68181 main.go:141] libmachine: (flannel-459514) Calling .GetSSHHostname
	I0409 00:29:14.487394   68181 main.go:141] libmachine: (flannel-459514) DBG | domain flannel-459514 has defined MAC address 52:54:00:b0:26:17 in network mk-flannel-459514
	I0409 00:29:14.487930   68181 main.go:141] libmachine: (flannel-459514) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:26:17", ip: ""} in network mk-flannel-459514: {Iface:virbr4 ExpiryTime:2025-04-09 01:29:03 +0000 UTC Type:0 Mac:52:54:00:b0:26:17 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:flannel-459514 Clientid:01:52:54:00:b0:26:17}
	I0409 00:29:14.487962   68181 main.go:141] libmachine: (flannel-459514) DBG | domain flannel-459514 has defined IP address 192.168.72.137 and MAC address 52:54:00:b0:26:17 in network mk-flannel-459514
	I0409 00:29:14.488320   68181 main.go:141] libmachine: (flannel-459514) Calling .GetSSHPort
	I0409 00:29:14.488535   68181 main.go:141] libmachine: (flannel-459514) Calling .GetSSHKeyPath
	I0409 00:29:14.488720   68181 main.go:141] libmachine: (flannel-459514) Calling .GetSSHUsername
	I0409 00:29:14.488872   68181 sshutil.go:53] new ssh client: &{IP:192.168.72.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20501-9125/.minikube/machines/flannel-459514/id_rsa Username:docker}
	I0409 00:29:14.577708   68181 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20501-9125/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0409 00:29:14.601096   68181 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20501-9125/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0409 00:29:14.623835   68181 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20501-9125/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I0409 00:29:14.645080   68181 provision.go:87] duration metric: took 320.126443ms to configureAuth
	I0409 00:29:14.645110   68181 buildroot.go:189] setting minikube options for container-runtime
	I0409 00:29:14.645268   68181 config.go:182] Loaded profile config "flannel-459514": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0409 00:29:14.645330   68181 main.go:141] libmachine: (flannel-459514) Calling .GetSSHHostname
	I0409 00:29:14.647807   68181 main.go:141] libmachine: (flannel-459514) DBG | domain flannel-459514 has defined MAC address 52:54:00:b0:26:17 in network mk-flannel-459514
	I0409 00:29:14.648159   68181 main.go:141] libmachine: (flannel-459514) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:26:17", ip: ""} in network mk-flannel-459514: {Iface:virbr4 ExpiryTime:2025-04-09 01:29:03 +0000 UTC Type:0 Mac:52:54:00:b0:26:17 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:flannel-459514 Clientid:01:52:54:00:b0:26:17}
	I0409 00:29:14.648190   68181 main.go:141] libmachine: (flannel-459514) DBG | domain flannel-459514 has defined IP address 192.168.72.137 and MAC address 52:54:00:b0:26:17 in network mk-flannel-459514
	I0409 00:29:14.648395   68181 main.go:141] libmachine: (flannel-459514) Calling .GetSSHPort
	I0409 00:29:14.648592   68181 main.go:141] libmachine: (flannel-459514) Calling .GetSSHKeyPath
	I0409 00:29:14.648773   68181 main.go:141] libmachine: (flannel-459514) Calling .GetSSHKeyPath
	I0409 00:29:14.648902   68181 main.go:141] libmachine: (flannel-459514) Calling .GetSSHUsername
	I0409 00:29:14.649110   68181 main.go:141] libmachine: Using SSH client type: native
	I0409 00:29:14.649290   68181 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.72.137 22 <nil> <nil>}
	I0409 00:29:14.649304   68181 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0409 00:29:14.870033   68181 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0409 00:29:14.870060   68181 main.go:141] libmachine: Checking connection to Docker...
	I0409 00:29:14.870067   68181 main.go:141] libmachine: (flannel-459514) Calling .GetURL
	I0409 00:29:14.871347   68181 main.go:141] libmachine: (flannel-459514) DBG | using libvirt version 6000000
	I0409 00:29:14.873533   68181 main.go:141] libmachine: (flannel-459514) DBG | domain flannel-459514 has defined MAC address 52:54:00:b0:26:17 in network mk-flannel-459514
	I0409 00:29:14.873953   68181 main.go:141] libmachine: (flannel-459514) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:26:17", ip: ""} in network mk-flannel-459514: {Iface:virbr4 ExpiryTime:2025-04-09 01:29:03 +0000 UTC Type:0 Mac:52:54:00:b0:26:17 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:flannel-459514 Clientid:01:52:54:00:b0:26:17}
	I0409 00:29:14.874001   68181 main.go:141] libmachine: (flannel-459514) DBG | domain flannel-459514 has defined IP address 192.168.72.137 and MAC address 52:54:00:b0:26:17 in network mk-flannel-459514
	I0409 00:29:14.874203   68181 main.go:141] libmachine: Docker is up and running!
	I0409 00:29:14.874224   68181 main.go:141] libmachine: Reticulating splines...
	I0409 00:29:14.874232   68181 client.go:171] duration metric: took 27.643078326s to LocalClient.Create
	I0409 00:29:14.874263   68181 start.go:167] duration metric: took 27.643150796s to libmachine.API.Create "flannel-459514"
	I0409 00:29:14.874275   68181 start.go:293] postStartSetup for "flannel-459514" (driver="kvm2")
	I0409 00:29:14.874304   68181 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0409 00:29:14.874328   68181 main.go:141] libmachine: (flannel-459514) Calling .DriverName
	I0409 00:29:14.874607   68181 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0409 00:29:14.874632   68181 main.go:141] libmachine: (flannel-459514) Calling .GetSSHHostname
	I0409 00:29:14.876894   68181 main.go:141] libmachine: (flannel-459514) DBG | domain flannel-459514 has defined MAC address 52:54:00:b0:26:17 in network mk-flannel-459514
	I0409 00:29:14.877247   68181 main.go:141] libmachine: (flannel-459514) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:26:17", ip: ""} in network mk-flannel-459514: {Iface:virbr4 ExpiryTime:2025-04-09 01:29:03 +0000 UTC Type:0 Mac:52:54:00:b0:26:17 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:flannel-459514 Clientid:01:52:54:00:b0:26:17}
	I0409 00:29:14.877281   68181 main.go:141] libmachine: (flannel-459514) DBG | domain flannel-459514 has defined IP address 192.168.72.137 and MAC address 52:54:00:b0:26:17 in network mk-flannel-459514
	I0409 00:29:14.877501   68181 main.go:141] libmachine: (flannel-459514) Calling .GetSSHPort
	I0409 00:29:14.877673   68181 main.go:141] libmachine: (flannel-459514) Calling .GetSSHKeyPath
	I0409 00:29:14.877844   68181 main.go:141] libmachine: (flannel-459514) Calling .GetSSHUsername
	I0409 00:29:14.877997   68181 sshutil.go:53] new ssh client: &{IP:192.168.72.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20501-9125/.minikube/machines/flannel-459514/id_rsa Username:docker}
	I0409 00:29:14.961725   68181 ssh_runner.go:195] Run: cat /etc/os-release
	I0409 00:29:14.965440   68181 info.go:137] Remote host: Buildroot 2023.02.9
	I0409 00:29:14.965462   68181 filesync.go:126] Scanning /home/jenkins/minikube-integration/20501-9125/.minikube/addons for local assets ...
	I0409 00:29:14.965535   68181 filesync.go:126] Scanning /home/jenkins/minikube-integration/20501-9125/.minikube/files for local assets ...
	I0409 00:29:14.965635   68181 filesync.go:149] local asset: /home/jenkins/minikube-integration/20501-9125/.minikube/files/etc/ssl/certs/163142.pem -> 163142.pem in /etc/ssl/certs
	I0409 00:29:14.965720   68181 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0409 00:29:14.974762   68181 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20501-9125/.minikube/files/etc/ssl/certs/163142.pem --> /etc/ssl/certs/163142.pem (1708 bytes)
	I0409 00:29:14.996475   68181 start.go:296] duration metric: took 122.187379ms for postStartSetup
	I0409 00:29:14.996525   68181 main.go:141] libmachine: (flannel-459514) Calling .GetConfigRaw
	I0409 00:29:14.997058   68181 main.go:141] libmachine: (flannel-459514) Calling .GetIP
	I0409 00:29:14.999695   68181 main.go:141] libmachine: (flannel-459514) DBG | domain flannel-459514 has defined MAC address 52:54:00:b0:26:17 in network mk-flannel-459514
	I0409 00:29:15.000147   68181 main.go:141] libmachine: (flannel-459514) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:26:17", ip: ""} in network mk-flannel-459514: {Iface:virbr4 ExpiryTime:2025-04-09 01:29:03 +0000 UTC Type:0 Mac:52:54:00:b0:26:17 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:flannel-459514 Clientid:01:52:54:00:b0:26:17}
	I0409 00:29:15.000184   68181 main.go:141] libmachine: (flannel-459514) DBG | domain flannel-459514 has defined IP address 192.168.72.137 and MAC address 52:54:00:b0:26:17 in network mk-flannel-459514
	I0409 00:29:15.000317   68181 profile.go:143] Saving config to /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/flannel-459514/config.json ...
	I0409 00:29:15.000486   68181 start.go:128] duration metric: took 27.791643315s to createHost
	I0409 00:29:15.000506   68181 main.go:141] libmachine: (flannel-459514) Calling .GetSSHHostname
	I0409 00:29:15.002910   68181 main.go:141] libmachine: (flannel-459514) DBG | domain flannel-459514 has defined MAC address 52:54:00:b0:26:17 in network mk-flannel-459514
	I0409 00:29:15.003298   68181 main.go:141] libmachine: (flannel-459514) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:26:17", ip: ""} in network mk-flannel-459514: {Iface:virbr4 ExpiryTime:2025-04-09 01:29:03 +0000 UTC Type:0 Mac:52:54:00:b0:26:17 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:flannel-459514 Clientid:01:52:54:00:b0:26:17}
	I0409 00:29:15.003337   68181 main.go:141] libmachine: (flannel-459514) DBG | domain flannel-459514 has defined IP address 192.168.72.137 and MAC address 52:54:00:b0:26:17 in network mk-flannel-459514
	I0409 00:29:15.003477   68181 main.go:141] libmachine: (flannel-459514) Calling .GetSSHPort
	I0409 00:29:15.003675   68181 main.go:141] libmachine: (flannel-459514) Calling .GetSSHKeyPath
	I0409 00:29:15.003848   68181 main.go:141] libmachine: (flannel-459514) Calling .GetSSHKeyPath
	I0409 00:29:15.004001   68181 main.go:141] libmachine: (flannel-459514) Calling .GetSSHUsername
	I0409 00:29:15.004166   68181 main.go:141] libmachine: Using SSH client type: native
	I0409 00:29:15.004418   68181 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.72.137 22 <nil> <nil>}
	I0409 00:29:15.004430   68181 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0409 00:29:15.108565   68181 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744158555.086174992
	
	I0409 00:29:15.108585   68181 fix.go:216] guest clock: 1744158555.086174992
	I0409 00:29:15.108594   68181 fix.go:229] Guest: 2025-04-09 00:29:15.086174992 +0000 UTC Remote: 2025-04-09 00:29:15.000495743 +0000 UTC m=+48.996510963 (delta=85.679249ms)
	I0409 00:29:15.108617   68181 fix.go:200] guest clock delta is within tolerance: 85.679249ms
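
The clock check above runs "date +%s.%N" in the guest and compares it with the host time; the 85ms delta is accepted as within tolerance. A small sketch of that comparison, using the sample value from the log and an illustrative 1s tolerance (the actual tolerance is whatever fix.go uses):

// clockdelta.go: compare a guest "date +%s.%N" reading with the host clock.
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

func parseUnixNano(s string) (time.Time, error) {
	sec, frac, _ := strings.Cut(strings.TrimSpace(s), ".")
	secs, err := strconv.ParseInt(sec, 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nanos int64
	if frac != "" {
		frac = (frac + "000000000")[:9] // pad/truncate to nanoseconds
		if nanos, err = strconv.ParseInt(frac, 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(secs, nanos), nil
}

func main() {
	guest, err := parseUnixNano("1744158555.086174992") // value from the log
	if err != nil {
		fmt.Println(err)
		return
	}
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = time.Second // illustrative choice
	fmt.Printf("guest clock delta: %v (tolerance %v, ok=%v)\n", delta, tolerance, delta <= tolerance)
}
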
	I0409 00:29:15.108623   68181 start.go:83] releasing machines lock for "flannel-459514", held for 27.899988113s
	I0409 00:29:15.108651   68181 main.go:141] libmachine: (flannel-459514) Calling .DriverName
	I0409 00:29:15.108935   68181 main.go:141] libmachine: (flannel-459514) Calling .GetIP
	I0409 00:29:15.111771   68181 main.go:141] libmachine: (flannel-459514) DBG | domain flannel-459514 has defined MAC address 52:54:00:b0:26:17 in network mk-flannel-459514
	I0409 00:29:15.112246   68181 main.go:141] libmachine: (flannel-459514) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:26:17", ip: ""} in network mk-flannel-459514: {Iface:virbr4 ExpiryTime:2025-04-09 01:29:03 +0000 UTC Type:0 Mac:52:54:00:b0:26:17 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:flannel-459514 Clientid:01:52:54:00:b0:26:17}
	I0409 00:29:15.112276   68181 main.go:141] libmachine: (flannel-459514) DBG | domain flannel-459514 has defined IP address 192.168.72.137 and MAC address 52:54:00:b0:26:17 in network mk-flannel-459514
	I0409 00:29:15.112458   68181 main.go:141] libmachine: (flannel-459514) Calling .DriverName
	I0409 00:29:15.112972   68181 main.go:141] libmachine: (flannel-459514) Calling .DriverName
	I0409 00:29:15.113167   68181 main.go:141] libmachine: (flannel-459514) Calling .DriverName
	I0409 00:29:15.113263   68181 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0409 00:29:15.113304   68181 main.go:141] libmachine: (flannel-459514) Calling .GetSSHHostname
	I0409 00:29:15.113380   68181 ssh_runner.go:195] Run: cat /version.json
	I0409 00:29:15.113397   68181 main.go:141] libmachine: (flannel-459514) Calling .GetSSHHostname
	I0409 00:29:15.115733   68181 main.go:141] libmachine: (flannel-459514) DBG | domain flannel-459514 has defined MAC address 52:54:00:b0:26:17 in network mk-flannel-459514
	I0409 00:29:15.116112   68181 main.go:141] libmachine: (flannel-459514) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:26:17", ip: ""} in network mk-flannel-459514: {Iface:virbr4 ExpiryTime:2025-04-09 01:29:03 +0000 UTC Type:0 Mac:52:54:00:b0:26:17 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:flannel-459514 Clientid:01:52:54:00:b0:26:17}
	I0409 00:29:15.116140   68181 main.go:141] libmachine: (flannel-459514) DBG | domain flannel-459514 has defined IP address 192.168.72.137 and MAC address 52:54:00:b0:26:17 in network mk-flannel-459514
	I0409 00:29:15.116293   68181 main.go:141] libmachine: (flannel-459514) Calling .GetSSHPort
	I0409 00:29:15.116302   68181 main.go:141] libmachine: (flannel-459514) DBG | domain flannel-459514 has defined MAC address 52:54:00:b0:26:17 in network mk-flannel-459514
	I0409 00:29:15.116460   68181 main.go:141] libmachine: (flannel-459514) Calling .GetSSHKeyPath
	I0409 00:29:15.116586   68181 main.go:141] libmachine: (flannel-459514) Calling .GetSSHUsername
	I0409 00:29:15.116732   68181 main.go:141] libmachine: (flannel-459514) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:26:17", ip: ""} in network mk-flannel-459514: {Iface:virbr4 ExpiryTime:2025-04-09 01:29:03 +0000 UTC Type:0 Mac:52:54:00:b0:26:17 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:flannel-459514 Clientid:01:52:54:00:b0:26:17}
	I0409 00:29:15.116751   68181 main.go:141] libmachine: (flannel-459514) DBG | domain flannel-459514 has defined IP address 192.168.72.137 and MAC address 52:54:00:b0:26:17 in network mk-flannel-459514
	I0409 00:29:15.116742   68181 sshutil.go:53] new ssh client: &{IP:192.168.72.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20501-9125/.minikube/machines/flannel-459514/id_rsa Username:docker}
	I0409 00:29:15.116911   68181 main.go:141] libmachine: (flannel-459514) Calling .GetSSHPort
	I0409 00:29:15.117060   68181 main.go:141] libmachine: (flannel-459514) Calling .GetSSHKeyPath
	I0409 00:29:15.117177   68181 main.go:141] libmachine: (flannel-459514) Calling .GetSSHUsername
	I0409 00:29:15.117296   68181 sshutil.go:53] new ssh client: &{IP:192.168.72.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20501-9125/.minikube/machines/flannel-459514/id_rsa Username:docker}
	I0409 00:29:15.237148   68181 ssh_runner.go:195] Run: systemctl --version
	I0409 00:29:15.243409   68181 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0409 00:29:15.405593   68181 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0409 00:29:15.411335   68181 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0409 00:29:15.411400   68181 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0409 00:29:15.428781   68181 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0409 00:29:15.428800   68181 start.go:495] detecting cgroup driver to use...
	I0409 00:29:15.428862   68181 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0409 00:29:15.445580   68181 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0409 00:29:15.460550   68181 docker.go:217] disabling cri-docker service (if available) ...
	I0409 00:29:15.460620   68181 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0409 00:29:15.475041   68181 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0409 00:29:15.488174   68181 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0409 00:29:15.601426   68181 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0409 00:29:15.738321   68181 docker.go:233] disabling docker service ...
	I0409 00:29:15.738402   68181 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0409 00:29:15.752837   68181 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0409 00:29:15.765714   68181 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0409 00:29:15.893036   68181 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0409 00:29:16.014129   68181 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
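
Before CRI-O is configured, the logs show cri-docker and docker being stopped, disabled and masked so they cannot grab the container sockets. A rough sketch of that systemctl sequence, shelling out the same way (errors from the stop calls are ignored because the units may not exist on this image):

// disable_docker.go: turn off cri-docker and docker, as in the log sequence above.
package main

import (
	"fmt"
	"os/exec"
)

func systemctl(args ...string) error {
	out, err := exec.Command("sudo", append([]string{"systemctl"}, args...)...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("systemctl %v: %v (%s)", args, err, out)
	}
	return nil
}

func main() {
	// Stops may fail if a unit is absent; that is fine.
	_ = systemctl("stop", "-f", "cri-docker.socket")
	_ = systemctl("stop", "-f", "cri-docker.service")
	_ = systemctl("disable", "cri-docker.socket")
	_ = systemctl("mask", "cri-docker.service")

	_ = systemctl("stop", "-f", "docker.socket")
	_ = systemctl("stop", "-f", "docker.service")
	_ = systemctl("disable", "docker.socket")
	_ = systemctl("mask", "docker.service")

	if err := systemctl("is-active", "--quiet", "docker"); err != nil {
		fmt.Println("docker is not active (expected):", err)
	}
}
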
	I0409 00:29:16.027859   68181 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0409 00:29:15.132665   69188 machine.go:93] provisionDockerMachine start ...
	I0409 00:29:15.132684   69188 main.go:141] libmachine: (kubernetes-upgrade-636554) Calling .DriverName
	I0409 00:29:15.132854   69188 main.go:141] libmachine: (kubernetes-upgrade-636554) Calling .GetSSHHostname
	I0409 00:29:15.135342   69188 main.go:141] libmachine: (kubernetes-upgrade-636554) DBG | domain kubernetes-upgrade-636554 has defined MAC address 52:54:00:42:e7:ae in network mk-kubernetes-upgrade-636554
	I0409 00:29:15.135814   69188 main.go:141] libmachine: (kubernetes-upgrade-636554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:e7:ae", ip: ""} in network mk-kubernetes-upgrade-636554: {Iface:virbr2 ExpiryTime:2025-04-09 01:28:39 +0000 UTC Type:0 Mac:52:54:00:42:e7:ae Iaid: IPaddr:192.168.50.37 Prefix:24 Hostname:kubernetes-upgrade-636554 Clientid:01:52:54:00:42:e7:ae}
	I0409 00:29:15.135857   69188 main.go:141] libmachine: (kubernetes-upgrade-636554) DBG | domain kubernetes-upgrade-636554 has defined IP address 192.168.50.37 and MAC address 52:54:00:42:e7:ae in network mk-kubernetes-upgrade-636554
	I0409 00:29:15.136043   69188 main.go:141] libmachine: (kubernetes-upgrade-636554) Calling .GetSSHPort
	I0409 00:29:15.136212   69188 main.go:141] libmachine: (kubernetes-upgrade-636554) Calling .GetSSHKeyPath
	I0409 00:29:15.136426   69188 main.go:141] libmachine: (kubernetes-upgrade-636554) Calling .GetSSHKeyPath
	I0409 00:29:15.136583   69188 main.go:141] libmachine: (kubernetes-upgrade-636554) Calling .GetSSHUsername
	I0409 00:29:15.136759   69188 main.go:141] libmachine: Using SSH client type: native
	I0409 00:29:15.137032   69188 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.37 22 <nil> <nil>}
	I0409 00:29:15.137045   69188 main.go:141] libmachine: About to run SSH command:
	hostname
	I0409 00:29:15.243973   69188 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-636554
	
	I0409 00:29:15.244000   69188 main.go:141] libmachine: (kubernetes-upgrade-636554) Calling .GetMachineName
	I0409 00:29:15.244229   69188 buildroot.go:166] provisioning hostname "kubernetes-upgrade-636554"
	I0409 00:29:15.244258   69188 main.go:141] libmachine: (kubernetes-upgrade-636554) Calling .GetMachineName
	I0409 00:29:15.244449   69188 main.go:141] libmachine: (kubernetes-upgrade-636554) Calling .GetSSHHostname
	I0409 00:29:15.247512   69188 main.go:141] libmachine: (kubernetes-upgrade-636554) DBG | domain kubernetes-upgrade-636554 has defined MAC address 52:54:00:42:e7:ae in network mk-kubernetes-upgrade-636554
	I0409 00:29:15.247938   69188 main.go:141] libmachine: (kubernetes-upgrade-636554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:e7:ae", ip: ""} in network mk-kubernetes-upgrade-636554: {Iface:virbr2 ExpiryTime:2025-04-09 01:28:39 +0000 UTC Type:0 Mac:52:54:00:42:e7:ae Iaid: IPaddr:192.168.50.37 Prefix:24 Hostname:kubernetes-upgrade-636554 Clientid:01:52:54:00:42:e7:ae}
	I0409 00:29:15.247968   69188 main.go:141] libmachine: (kubernetes-upgrade-636554) DBG | domain kubernetes-upgrade-636554 has defined IP address 192.168.50.37 and MAC address 52:54:00:42:e7:ae in network mk-kubernetes-upgrade-636554
	I0409 00:29:15.248078   69188 main.go:141] libmachine: (kubernetes-upgrade-636554) Calling .GetSSHPort
	I0409 00:29:15.248244   69188 main.go:141] libmachine: (kubernetes-upgrade-636554) Calling .GetSSHKeyPath
	I0409 00:29:15.248406   69188 main.go:141] libmachine: (kubernetes-upgrade-636554) Calling .GetSSHKeyPath
	I0409 00:29:15.248549   69188 main.go:141] libmachine: (kubernetes-upgrade-636554) Calling .GetSSHUsername
	I0409 00:29:15.248741   69188 main.go:141] libmachine: Using SSH client type: native
	I0409 00:29:15.249006   69188 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.37 22 <nil> <nil>}
	I0409 00:29:15.249025   69188 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-636554 && echo "kubernetes-upgrade-636554" | sudo tee /etc/hostname
	I0409 00:29:15.369521   69188 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-636554
	
	I0409 00:29:15.369552   69188 main.go:141] libmachine: (kubernetes-upgrade-636554) Calling .GetSSHHostname
	I0409 00:29:15.372072   69188 main.go:141] libmachine: (kubernetes-upgrade-636554) DBG | domain kubernetes-upgrade-636554 has defined MAC address 52:54:00:42:e7:ae in network mk-kubernetes-upgrade-636554
	I0409 00:29:15.372398   69188 main.go:141] libmachine: (kubernetes-upgrade-636554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:e7:ae", ip: ""} in network mk-kubernetes-upgrade-636554: {Iface:virbr2 ExpiryTime:2025-04-09 01:28:39 +0000 UTC Type:0 Mac:52:54:00:42:e7:ae Iaid: IPaddr:192.168.50.37 Prefix:24 Hostname:kubernetes-upgrade-636554 Clientid:01:52:54:00:42:e7:ae}
	I0409 00:29:15.372427   69188 main.go:141] libmachine: (kubernetes-upgrade-636554) DBG | domain kubernetes-upgrade-636554 has defined IP address 192.168.50.37 and MAC address 52:54:00:42:e7:ae in network mk-kubernetes-upgrade-636554
	I0409 00:29:15.372647   69188 main.go:141] libmachine: (kubernetes-upgrade-636554) Calling .GetSSHPort
	I0409 00:29:15.372807   69188 main.go:141] libmachine: (kubernetes-upgrade-636554) Calling .GetSSHKeyPath
	I0409 00:29:15.372955   69188 main.go:141] libmachine: (kubernetes-upgrade-636554) Calling .GetSSHKeyPath
	I0409 00:29:15.373115   69188 main.go:141] libmachine: (kubernetes-upgrade-636554) Calling .GetSSHUsername
	I0409 00:29:15.373257   69188 main.go:141] libmachine: Using SSH client type: native
	I0409 00:29:15.373512   69188 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.37 22 <nil> <nil>}
	I0409 00:29:15.373538   69188 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-636554' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-636554/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-636554' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0409 00:29:15.489478   69188 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0409 00:29:15.489504   69188 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20501-9125/.minikube CaCertPath:/home/jenkins/minikube-integration/20501-9125/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20501-9125/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20501-9125/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20501-9125/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20501-9125/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20501-9125/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20501-9125/.minikube}
	I0409 00:29:15.489526   69188 buildroot.go:174] setting up certificates
	I0409 00:29:15.489538   69188 provision.go:84] configureAuth start
	I0409 00:29:15.489552   69188 main.go:141] libmachine: (kubernetes-upgrade-636554) Calling .GetMachineName
	I0409 00:29:15.489834   69188 main.go:141] libmachine: (kubernetes-upgrade-636554) Calling .GetIP
	I0409 00:29:15.492778   69188 main.go:141] libmachine: (kubernetes-upgrade-636554) DBG | domain kubernetes-upgrade-636554 has defined MAC address 52:54:00:42:e7:ae in network mk-kubernetes-upgrade-636554
	I0409 00:29:15.493235   69188 main.go:141] libmachine: (kubernetes-upgrade-636554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:e7:ae", ip: ""} in network mk-kubernetes-upgrade-636554: {Iface:virbr2 ExpiryTime:2025-04-09 01:28:39 +0000 UTC Type:0 Mac:52:54:00:42:e7:ae Iaid: IPaddr:192.168.50.37 Prefix:24 Hostname:kubernetes-upgrade-636554 Clientid:01:52:54:00:42:e7:ae}
	I0409 00:29:15.493267   69188 main.go:141] libmachine: (kubernetes-upgrade-636554) DBG | domain kubernetes-upgrade-636554 has defined IP address 192.168.50.37 and MAC address 52:54:00:42:e7:ae in network mk-kubernetes-upgrade-636554
	I0409 00:29:15.493455   69188 main.go:141] libmachine: (kubernetes-upgrade-636554) Calling .GetSSHHostname
	I0409 00:29:15.496870   69188 main.go:141] libmachine: (kubernetes-upgrade-636554) DBG | domain kubernetes-upgrade-636554 has defined MAC address 52:54:00:42:e7:ae in network mk-kubernetes-upgrade-636554
	I0409 00:29:15.497357   69188 main.go:141] libmachine: (kubernetes-upgrade-636554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:e7:ae", ip: ""} in network mk-kubernetes-upgrade-636554: {Iface:virbr2 ExpiryTime:2025-04-09 01:28:39 +0000 UTC Type:0 Mac:52:54:00:42:e7:ae Iaid: IPaddr:192.168.50.37 Prefix:24 Hostname:kubernetes-upgrade-636554 Clientid:01:52:54:00:42:e7:ae}
	I0409 00:29:15.497392   69188 main.go:141] libmachine: (kubernetes-upgrade-636554) DBG | domain kubernetes-upgrade-636554 has defined IP address 192.168.50.37 and MAC address 52:54:00:42:e7:ae in network mk-kubernetes-upgrade-636554
	I0409 00:29:15.497659   69188 provision.go:143] copyHostCerts
	I0409 00:29:15.497724   69188 exec_runner.go:144] found /home/jenkins/minikube-integration/20501-9125/.minikube/key.pem, removing ...
	I0409 00:29:15.497748   69188 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20501-9125/.minikube/key.pem
	I0409 00:29:15.497822   69188 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20501-9125/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20501-9125/.minikube/key.pem (1675 bytes)
	I0409 00:29:15.498025   69188 exec_runner.go:144] found /home/jenkins/minikube-integration/20501-9125/.minikube/ca.pem, removing ...
	I0409 00:29:15.498046   69188 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20501-9125/.minikube/ca.pem
	I0409 00:29:15.498085   69188 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20501-9125/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20501-9125/.minikube/ca.pem (1082 bytes)
	I0409 00:29:15.498174   69188 exec_runner.go:144] found /home/jenkins/minikube-integration/20501-9125/.minikube/cert.pem, removing ...
	I0409 00:29:15.498184   69188 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20501-9125/.minikube/cert.pem
	I0409 00:29:15.498213   69188 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20501-9125/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20501-9125/.minikube/cert.pem (1123 bytes)
	I0409 00:29:15.498276   69188 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20501-9125/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20501-9125/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20501-9125/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-636554 san=[127.0.0.1 192.168.50.37 kubernetes-upgrade-636554 localhost minikube]
	I0409 00:29:15.758912   69188 provision.go:177] copyRemoteCerts
	I0409 00:29:15.758956   69188 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0409 00:29:15.758978   69188 main.go:141] libmachine: (kubernetes-upgrade-636554) Calling .GetSSHHostname
	I0409 00:29:15.761708   69188 main.go:141] libmachine: (kubernetes-upgrade-636554) DBG | domain kubernetes-upgrade-636554 has defined MAC address 52:54:00:42:e7:ae in network mk-kubernetes-upgrade-636554
	I0409 00:29:15.762049   69188 main.go:141] libmachine: (kubernetes-upgrade-636554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:e7:ae", ip: ""} in network mk-kubernetes-upgrade-636554: {Iface:virbr2 ExpiryTime:2025-04-09 01:28:39 +0000 UTC Type:0 Mac:52:54:00:42:e7:ae Iaid: IPaddr:192.168.50.37 Prefix:24 Hostname:kubernetes-upgrade-636554 Clientid:01:52:54:00:42:e7:ae}
	I0409 00:29:15.762087   69188 main.go:141] libmachine: (kubernetes-upgrade-636554) DBG | domain kubernetes-upgrade-636554 has defined IP address 192.168.50.37 and MAC address 52:54:00:42:e7:ae in network mk-kubernetes-upgrade-636554
	I0409 00:29:15.762274   69188 main.go:141] libmachine: (kubernetes-upgrade-636554) Calling .GetSSHPort
	I0409 00:29:15.762448   69188 main.go:141] libmachine: (kubernetes-upgrade-636554) Calling .GetSSHKeyPath
	I0409 00:29:15.762595   69188 main.go:141] libmachine: (kubernetes-upgrade-636554) Calling .GetSSHUsername
	I0409 00:29:15.762700   69188 sshutil.go:53] new ssh client: &{IP:192.168.50.37 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20501-9125/.minikube/machines/kubernetes-upgrade-636554/id_rsa Username:docker}
	I0409 00:29:15.847297   69188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20501-9125/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0409 00:29:15.872671   69188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20501-9125/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0409 00:29:15.899124   69188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20501-9125/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0409 00:29:15.923057   69188 provision.go:87] duration metric: took 433.506683ms to configureAuth
	I0409 00:29:15.923086   69188 buildroot.go:189] setting minikube options for container-runtime
	I0409 00:29:15.923280   69188 config.go:182] Loaded profile config "kubernetes-upgrade-636554": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0409 00:29:15.923366   69188 main.go:141] libmachine: (kubernetes-upgrade-636554) Calling .GetSSHHostname
	I0409 00:29:15.926175   69188 main.go:141] libmachine: (kubernetes-upgrade-636554) DBG | domain kubernetes-upgrade-636554 has defined MAC address 52:54:00:42:e7:ae in network mk-kubernetes-upgrade-636554
	I0409 00:29:15.926611   69188 main.go:141] libmachine: (kubernetes-upgrade-636554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:e7:ae", ip: ""} in network mk-kubernetes-upgrade-636554: {Iface:virbr2 ExpiryTime:2025-04-09 01:28:39 +0000 UTC Type:0 Mac:52:54:00:42:e7:ae Iaid: IPaddr:192.168.50.37 Prefix:24 Hostname:kubernetes-upgrade-636554 Clientid:01:52:54:00:42:e7:ae}
	I0409 00:29:15.926645   69188 main.go:141] libmachine: (kubernetes-upgrade-636554) DBG | domain kubernetes-upgrade-636554 has defined IP address 192.168.50.37 and MAC address 52:54:00:42:e7:ae in network mk-kubernetes-upgrade-636554
	I0409 00:29:15.926794   69188 main.go:141] libmachine: (kubernetes-upgrade-636554) Calling .GetSSHPort
	I0409 00:29:15.926958   69188 main.go:141] libmachine: (kubernetes-upgrade-636554) Calling .GetSSHKeyPath
	I0409 00:29:15.927108   69188 main.go:141] libmachine: (kubernetes-upgrade-636554) Calling .GetSSHKeyPath
	I0409 00:29:15.927238   69188 main.go:141] libmachine: (kubernetes-upgrade-636554) Calling .GetSSHUsername
	I0409 00:29:15.927540   69188 main.go:141] libmachine: Using SSH client type: native
	I0409 00:29:15.927760   69188 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.37 22 <nil> <nil>}
	I0409 00:29:15.927780   69188 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0409 00:29:16.045213   68181 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0409 00:29:16.045276   68181 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0409 00:29:16.054637   68181 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0409 00:29:16.054727   68181 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0409 00:29:16.064336   68181 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0409 00:29:16.075449   68181 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0409 00:29:16.086385   68181 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0409 00:29:16.096429   68181 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0409 00:29:16.105827   68181 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0409 00:29:16.121844   68181 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0409 00:29:16.131552   68181 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0409 00:29:16.140717   68181 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0409 00:29:16.140756   68181 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0409 00:29:16.153649   68181 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0409 00:29:16.162670   68181 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0409 00:29:16.281350   68181 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0409 00:29:16.373368   68181 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0409 00:29:16.373425   68181 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0409 00:29:16.378293   68181 start.go:563] Will wait 60s for crictl version
	I0409 00:29:16.378365   68181 ssh_runner.go:195] Run: which crictl
	I0409 00:29:16.381918   68181 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0409 00:29:16.419475   68181 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0409 00:29:16.419551   68181 ssh_runner.go:195] Run: crio --version
	I0409 00:29:16.447538   68181 ssh_runner.go:195] Run: crio --version
	I0409 00:29:16.479001   68181 out.go:177] * Preparing Kubernetes v1.32.2 on CRI-O 1.29.1 ...
	I0409 00:29:13.188134   70164 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0409 00:29:13.188185   70164 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20501-9125/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4
	I0409 00:29:13.188197   70164 cache.go:56] Caching tarball of preloaded images
	I0409 00:29:13.188279   70164 preload.go:172] Found /home/jenkins/minikube-integration/20501-9125/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0409 00:29:13.188291   70164 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on crio
	I0409 00:29:13.188397   70164 profile.go:143] Saving config to /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/bridge-459514/config.json ...
	I0409 00:29:13.188421   70164 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/bridge-459514/config.json: {Name:mk76aa9f277e6477f3b4400d6c0a7cde4463236e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0409 00:29:13.188563   70164 start.go:360] acquireMachinesLock for bridge-459514: {Name:mke7be7b51cfddf557a39ecf6493fff6a1168ec9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0409 00:29:16.921216   66178 pod_ready.go:103] pod "coredns-668d6bf9bc-xgt7n" in "kube-system" namespace has status "Ready":"False"
	I0409 00:29:18.921482   66178 pod_ready.go:103] pod "coredns-668d6bf9bc-xgt7n" in "kube-system" namespace has status "Ready":"False"
	I0409 00:29:16.480161   68181 main.go:141] libmachine: (flannel-459514) Calling .GetIP
	I0409 00:29:16.483068   68181 main.go:141] libmachine: (flannel-459514) DBG | domain flannel-459514 has defined MAC address 52:54:00:b0:26:17 in network mk-flannel-459514
	I0409 00:29:16.483446   68181 main.go:141] libmachine: (flannel-459514) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:26:17", ip: ""} in network mk-flannel-459514: {Iface:virbr4 ExpiryTime:2025-04-09 01:29:03 +0000 UTC Type:0 Mac:52:54:00:b0:26:17 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:flannel-459514 Clientid:01:52:54:00:b0:26:17}
	I0409 00:29:16.483482   68181 main.go:141] libmachine: (flannel-459514) DBG | domain flannel-459514 has defined IP address 192.168.72.137 and MAC address 52:54:00:b0:26:17 in network mk-flannel-459514
	I0409 00:29:16.483687   68181 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0409 00:29:16.487663   68181 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0409 00:29:16.499339   68181 kubeadm.go:883] updating cluster {Name:flannel-459514 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:flannel-459514 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP:192.168.72.137 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0409 00:29:16.499444   68181 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0409 00:29:16.499504   68181 ssh_runner.go:195] Run: sudo crictl images --output json
	I0409 00:29:16.528564   68181 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.2". assuming images are not preloaded.
	I0409 00:29:16.528617   68181 ssh_runner.go:195] Run: which lz4
	I0409 00:29:16.532166   68181 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0409 00:29:16.536118   68181 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0409 00:29:16.536155   68181 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20501-9125/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (399124012 bytes)
	I0409 00:29:17.750560   68181 crio.go:462] duration metric: took 1.218423979s to copy over tarball
	I0409 00:29:17.750644   68181 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0409 00:29:19.950261   68181 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.199565314s)
	I0409 00:29:19.950296   68181 crio.go:469] duration metric: took 2.199696454s to extract the tarball
	I0409 00:29:19.950304   68181 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0409 00:29:19.998438   68181 ssh_runner.go:195] Run: sudo crictl images --output json
	I0409 00:29:20.043403   68181 crio.go:514] all images are preloaded for cri-o runtime.
	I0409 00:29:20.043424   68181 cache_images.go:84] Images are preloaded, skipping loading
	I0409 00:29:20.043435   68181 kubeadm.go:934] updating node { 192.168.72.137 8443 v1.32.2 crio true true} ...
	I0409 00:29:20.043542   68181 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=flannel-459514 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.137
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:flannel-459514 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel}
	I0409 00:29:20.043608   68181 ssh_runner.go:195] Run: crio config
	I0409 00:29:20.094697   68181 cni.go:84] Creating CNI manager for "flannel"
	I0409 00:29:20.094726   68181 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0409 00:29:20.094750   68181 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.137 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:flannel-459514 NodeName:flannel-459514 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.137"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.137 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0409 00:29:20.094920   68181 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.137
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "flannel-459514"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.137"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.137"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0409 00:29:20.094995   68181 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0409 00:29:20.105024   68181 binaries.go:44] Found k8s binaries, skipping transfer
	I0409 00:29:20.105097   68181 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0409 00:29:20.114512   68181 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (314 bytes)
	I0409 00:29:20.130370   68181 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0409 00:29:20.145895   68181 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2294 bytes)
	I0409 00:29:20.160380   68181 ssh_runner.go:195] Run: grep 192.168.72.137	control-plane.minikube.internal$ /etc/hosts
	I0409 00:29:20.164125   68181 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.137	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0409 00:29:20.175785   68181 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0409 00:29:20.295824   68181 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0409 00:29:20.313516   68181 certs.go:68] Setting up /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/flannel-459514 for IP: 192.168.72.137
	I0409 00:29:20.313536   68181 certs.go:194] generating shared ca certs ...
	I0409 00:29:20.313566   68181 certs.go:226] acquiring lock for ca certs: {Name:mk0d455aae85017ac942481bbc1202ccedea144f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0409 00:29:20.313727   68181 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20501-9125/.minikube/ca.key
	I0409 00:29:20.313793   68181 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20501-9125/.minikube/proxy-client-ca.key
	I0409 00:29:20.313814   68181 certs.go:256] generating profile certs ...
	I0409 00:29:20.313904   68181 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/flannel-459514/client.key
	I0409 00:29:20.313929   68181 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/flannel-459514/client.crt with IP's: []
	I0409 00:29:20.336737   68181 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/flannel-459514/client.crt ...
	I0409 00:29:20.336765   68181 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/flannel-459514/client.crt: {Name:mkc19a896134195c59d2ffe4da27bcf573de50fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0409 00:29:20.337420   68181 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/flannel-459514/client.key ...
	I0409 00:29:20.337512   68181 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/flannel-459514/client.key: {Name:mkc8d3ae622336e120733c88d6e6b014cb57fec0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0409 00:29:20.337692   68181 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/flannel-459514/apiserver.key.ece545dd
	I0409 00:29:20.337709   68181 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/flannel-459514/apiserver.crt.ece545dd with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.137]
	I0409 00:29:20.703133   68181 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/flannel-459514/apiserver.crt.ece545dd ...
	I0409 00:29:20.703164   68181 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/flannel-459514/apiserver.crt.ece545dd: {Name:mk07eca7dfe3c97b8dfcf7d6808df7e11fba6d5f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0409 00:29:20.703364   68181 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/flannel-459514/apiserver.key.ece545dd ...
	I0409 00:29:20.703387   68181 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/flannel-459514/apiserver.key.ece545dd: {Name:mk4f6b47a086e65ce50833f58ebc44b5bf55672b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0409 00:29:20.703488   68181 certs.go:381] copying /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/flannel-459514/apiserver.crt.ece545dd -> /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/flannel-459514/apiserver.crt
	I0409 00:29:20.703562   68181 certs.go:385] copying /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/flannel-459514/apiserver.key.ece545dd -> /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/flannel-459514/apiserver.key
	I0409 00:29:20.703613   68181 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/flannel-459514/proxy-client.key
	I0409 00:29:20.703626   68181 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/flannel-459514/proxy-client.crt with IP's: []
	I0409 00:29:20.851951   68181 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/flannel-459514/proxy-client.crt ...
	I0409 00:29:20.851982   68181 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/flannel-459514/proxy-client.crt: {Name:mk65b46aedd1f6320e436fc0ec01c1f25ec710cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0409 00:29:20.852134   68181 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/flannel-459514/proxy-client.key ...
	I0409 00:29:20.852144   68181 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/flannel-459514/proxy-client.key: {Name:mk3e3959470fe3f497dafb7de822a326390af241 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0409 00:29:20.852306   68181 certs.go:484] found cert: /home/jenkins/minikube-integration/20501-9125/.minikube/certs/16314.pem (1338 bytes)
	W0409 00:29:20.852344   68181 certs.go:480] ignoring /home/jenkins/minikube-integration/20501-9125/.minikube/certs/16314_empty.pem, impossibly tiny 0 bytes
	I0409 00:29:20.852351   68181 certs.go:484] found cert: /home/jenkins/minikube-integration/20501-9125/.minikube/certs/ca-key.pem (1679 bytes)
	I0409 00:29:20.852369   68181 certs.go:484] found cert: /home/jenkins/minikube-integration/20501-9125/.minikube/certs/ca.pem (1082 bytes)
	I0409 00:29:20.852391   68181 certs.go:484] found cert: /home/jenkins/minikube-integration/20501-9125/.minikube/certs/cert.pem (1123 bytes)
	I0409 00:29:20.852416   68181 certs.go:484] found cert: /home/jenkins/minikube-integration/20501-9125/.minikube/certs/key.pem (1675 bytes)
	I0409 00:29:20.852452   68181 certs.go:484] found cert: /home/jenkins/minikube-integration/20501-9125/.minikube/files/etc/ssl/certs/163142.pem (1708 bytes)
	I0409 00:29:20.852994   68181 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20501-9125/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0409 00:29:20.876765   68181 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20501-9125/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0409 00:29:20.898636   68181 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20501-9125/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0409 00:29:20.920926   68181 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20501-9125/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0409 00:29:20.944437   68181 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/flannel-459514/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0409 00:29:20.968270   68181 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/flannel-459514/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0409 00:29:20.992788   68181 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/flannel-459514/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0409 00:29:21.020474   68181 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/flannel-459514/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0409 00:29:23.497216   70164 start.go:364] duration metric: took 10.308595713s to acquireMachinesLock for "bridge-459514"
	I0409 00:29:23.497280   70164 start.go:93] Provisioning new machine with config: &{Name:bridge-459514 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:bridge-459514 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0409 00:29:23.497413   70164 start.go:125] createHost starting for "" (driver="kvm2")
	I0409 00:29:21.420645   66178 pod_ready.go:103] pod "coredns-668d6bf9bc-xgt7n" in "kube-system" namespace has status "Ready":"False"
	I0409 00:29:23.420840   66178 pod_ready.go:103] pod "coredns-668d6bf9bc-xgt7n" in "kube-system" namespace has status "Ready":"False"
	I0409 00:29:25.421139   66178 pod_ready.go:103] pod "coredns-668d6bf9bc-xgt7n" in "kube-system" namespace has status "Ready":"False"
	I0409 00:29:21.047773   68181 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20501-9125/.minikube/certs/16314.pem --> /usr/share/ca-certificates/16314.pem (1338 bytes)
	I0409 00:29:21.072465   68181 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20501-9125/.minikube/files/etc/ssl/certs/163142.pem --> /usr/share/ca-certificates/163142.pem (1708 bytes)
	I0409 00:29:21.097867   68181 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20501-9125/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0409 00:29:21.122345   68181 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0409 00:29:21.139966   68181 ssh_runner.go:195] Run: openssl version
	I0409 00:29:21.146053   68181 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/163142.pem && ln -fs /usr/share/ca-certificates/163142.pem /etc/ssl/certs/163142.pem"
	I0409 00:29:21.157526   68181 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/163142.pem
	I0409 00:29:21.162434   68181 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  8 22:53 /usr/share/ca-certificates/163142.pem
	I0409 00:29:21.162505   68181 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/163142.pem
	I0409 00:29:21.168571   68181 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/163142.pem /etc/ssl/certs/3ec20f2e.0"
	I0409 00:29:21.179557   68181 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0409 00:29:21.194147   68181 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0409 00:29:21.198843   68181 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  8 22:46 /usr/share/ca-certificates/minikubeCA.pem
	I0409 00:29:21.198902   68181 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0409 00:29:21.205134   68181 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0409 00:29:21.215690   68181 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16314.pem && ln -fs /usr/share/ca-certificates/16314.pem /etc/ssl/certs/16314.pem"
	I0409 00:29:21.227975   68181 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16314.pem
	I0409 00:29:21.232351   68181 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  8 22:53 /usr/share/ca-certificates/16314.pem
	I0409 00:29:21.232407   68181 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16314.pem
	I0409 00:29:21.238444   68181 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16314.pem /etc/ssl/certs/51391683.0"
	I0409 00:29:21.248834   68181 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0409 00:29:21.252771   68181 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0409 00:29:21.252822   68181 kubeadm.go:392] StartCluster: {Name:flannel-459514 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:flannel-459514 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP:192.168.72.137 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0409 00:29:21.252900   68181 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0409 00:29:21.252946   68181 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0409 00:29:21.289907   68181 cri.go:89] found id: ""
	I0409 00:29:21.289979   68181 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0409 00:29:21.300355   68181 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0409 00:29:21.309156   68181 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0409 00:29:21.318333   68181 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0409 00:29:21.318353   68181 kubeadm.go:157] found existing configuration files:
	
	I0409 00:29:21.318397   68181 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0409 00:29:21.327180   68181 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0409 00:29:21.327223   68181 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0409 00:29:21.335935   68181 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0409 00:29:21.344542   68181 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0409 00:29:21.344599   68181 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0409 00:29:21.353387   68181 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0409 00:29:21.363569   68181 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0409 00:29:21.363629   68181 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0409 00:29:21.372533   68181 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0409 00:29:21.380869   68181 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0409 00:29:21.380912   68181 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0409 00:29:21.389350   68181 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0409 00:29:21.547595   68181 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0409 00:29:23.255632   69188 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0409 00:29:23.255691   69188 machine.go:96] duration metric: took 8.12298333s to provisionDockerMachine
	I0409 00:29:23.255712   69188 start.go:293] postStartSetup for "kubernetes-upgrade-636554" (driver="kvm2")
	I0409 00:29:23.255730   69188 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0409 00:29:23.255756   69188 main.go:141] libmachine: (kubernetes-upgrade-636554) Calling .DriverName
	I0409 00:29:23.256114   69188 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0409 00:29:23.256151   69188 main.go:141] libmachine: (kubernetes-upgrade-636554) Calling .GetSSHHostname
	I0409 00:29:23.259065   69188 main.go:141] libmachine: (kubernetes-upgrade-636554) DBG | domain kubernetes-upgrade-636554 has defined MAC address 52:54:00:42:e7:ae in network mk-kubernetes-upgrade-636554
	I0409 00:29:23.259404   69188 main.go:141] libmachine: (kubernetes-upgrade-636554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:e7:ae", ip: ""} in network mk-kubernetes-upgrade-636554: {Iface:virbr2 ExpiryTime:2025-04-09 01:28:39 +0000 UTC Type:0 Mac:52:54:00:42:e7:ae Iaid: IPaddr:192.168.50.37 Prefix:24 Hostname:kubernetes-upgrade-636554 Clientid:01:52:54:00:42:e7:ae}
	I0409 00:29:23.259424   69188 main.go:141] libmachine: (kubernetes-upgrade-636554) DBG | domain kubernetes-upgrade-636554 has defined IP address 192.168.50.37 and MAC address 52:54:00:42:e7:ae in network mk-kubernetes-upgrade-636554
	I0409 00:29:23.259613   69188 main.go:141] libmachine: (kubernetes-upgrade-636554) Calling .GetSSHPort
	I0409 00:29:23.259792   69188 main.go:141] libmachine: (kubernetes-upgrade-636554) Calling .GetSSHKeyPath
	I0409 00:29:23.259979   69188 main.go:141] libmachine: (kubernetes-upgrade-636554) Calling .GetSSHUsername
	I0409 00:29:23.260133   69188 sshutil.go:53] new ssh client: &{IP:192.168.50.37 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20501-9125/.minikube/machines/kubernetes-upgrade-636554/id_rsa Username:docker}
	I0409 00:29:23.343673   69188 ssh_runner.go:195] Run: cat /etc/os-release
	I0409 00:29:23.347716   69188 info.go:137] Remote host: Buildroot 2023.02.9
	I0409 00:29:23.347737   69188 filesync.go:126] Scanning /home/jenkins/minikube-integration/20501-9125/.minikube/addons for local assets ...
	I0409 00:29:23.347789   69188 filesync.go:126] Scanning /home/jenkins/minikube-integration/20501-9125/.minikube/files for local assets ...
	I0409 00:29:23.347852   69188 filesync.go:149] local asset: /home/jenkins/minikube-integration/20501-9125/.minikube/files/etc/ssl/certs/163142.pem -> 163142.pem in /etc/ssl/certs
	I0409 00:29:23.348015   69188 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0409 00:29:23.356971   69188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20501-9125/.minikube/files/etc/ssl/certs/163142.pem --> /etc/ssl/certs/163142.pem (1708 bytes)
	I0409 00:29:23.379395   69188 start.go:296] duration metric: took 123.663069ms for postStartSetup
	I0409 00:29:23.379434   69188 fix.go:56] duration metric: took 8.270636329s for fixHost
	I0409 00:29:23.379459   69188 main.go:141] libmachine: (kubernetes-upgrade-636554) Calling .GetSSHHostname
	I0409 00:29:23.382018   69188 main.go:141] libmachine: (kubernetes-upgrade-636554) DBG | domain kubernetes-upgrade-636554 has defined MAC address 52:54:00:42:e7:ae in network mk-kubernetes-upgrade-636554
	I0409 00:29:23.382347   69188 main.go:141] libmachine: (kubernetes-upgrade-636554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:e7:ae", ip: ""} in network mk-kubernetes-upgrade-636554: {Iface:virbr2 ExpiryTime:2025-04-09 01:28:39 +0000 UTC Type:0 Mac:52:54:00:42:e7:ae Iaid: IPaddr:192.168.50.37 Prefix:24 Hostname:kubernetes-upgrade-636554 Clientid:01:52:54:00:42:e7:ae}
	I0409 00:29:23.382379   69188 main.go:141] libmachine: (kubernetes-upgrade-636554) DBG | domain kubernetes-upgrade-636554 has defined IP address 192.168.50.37 and MAC address 52:54:00:42:e7:ae in network mk-kubernetes-upgrade-636554
	I0409 00:29:23.382533   69188 main.go:141] libmachine: (kubernetes-upgrade-636554) Calling .GetSSHPort
	I0409 00:29:23.382727   69188 main.go:141] libmachine: (kubernetes-upgrade-636554) Calling .GetSSHKeyPath
	I0409 00:29:23.382867   69188 main.go:141] libmachine: (kubernetes-upgrade-636554) Calling .GetSSHKeyPath
	I0409 00:29:23.382989   69188 main.go:141] libmachine: (kubernetes-upgrade-636554) Calling .GetSSHUsername
	I0409 00:29:23.383152   69188 main.go:141] libmachine: Using SSH client type: native
	I0409 00:29:23.383458   69188 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.37 22 <nil> <nil>}
	I0409 00:29:23.383471   69188 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0409 00:29:23.497050   69188 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744158563.487831756
	
	I0409 00:29:23.497074   69188 fix.go:216] guest clock: 1744158563.487831756
	I0409 00:29:23.497092   69188 fix.go:229] Guest: 2025-04-09 00:29:23.487831756 +0000 UTC Remote: 2025-04-09 00:29:23.379439612 +0000 UTC m=+17.175006828 (delta=108.392144ms)
	I0409 00:29:23.497111   69188 fix.go:200] guest clock delta is within tolerance: 108.392144ms
	I0409 00:29:23.497116   69188 start.go:83] releasing machines lock for "kubernetes-upgrade-636554", held for 8.388369222s
	I0409 00:29:23.497147   69188 main.go:141] libmachine: (kubernetes-upgrade-636554) Calling .DriverName
	I0409 00:29:23.497416   69188 main.go:141] libmachine: (kubernetes-upgrade-636554) Calling .GetIP
	I0409 00:29:23.500797   69188 main.go:141] libmachine: (kubernetes-upgrade-636554) DBG | domain kubernetes-upgrade-636554 has defined MAC address 52:54:00:42:e7:ae in network mk-kubernetes-upgrade-636554
	I0409 00:29:23.501223   69188 main.go:141] libmachine: (kubernetes-upgrade-636554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:e7:ae", ip: ""} in network mk-kubernetes-upgrade-636554: {Iface:virbr2 ExpiryTime:2025-04-09 01:28:39 +0000 UTC Type:0 Mac:52:54:00:42:e7:ae Iaid: IPaddr:192.168.50.37 Prefix:24 Hostname:kubernetes-upgrade-636554 Clientid:01:52:54:00:42:e7:ae}
	I0409 00:29:23.501258   69188 main.go:141] libmachine: (kubernetes-upgrade-636554) DBG | domain kubernetes-upgrade-636554 has defined IP address 192.168.50.37 and MAC address 52:54:00:42:e7:ae in network mk-kubernetes-upgrade-636554
	I0409 00:29:23.501463   69188 main.go:141] libmachine: (kubernetes-upgrade-636554) Calling .DriverName
	I0409 00:29:23.501954   69188 main.go:141] libmachine: (kubernetes-upgrade-636554) Calling .DriverName
	I0409 00:29:23.502119   69188 main.go:141] libmachine: (kubernetes-upgrade-636554) Calling .DriverName
	I0409 00:29:23.502211   69188 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0409 00:29:23.502250   69188 main.go:141] libmachine: (kubernetes-upgrade-636554) Calling .GetSSHHostname
	I0409 00:29:23.502363   69188 ssh_runner.go:195] Run: cat /version.json
	I0409 00:29:23.502394   69188 main.go:141] libmachine: (kubernetes-upgrade-636554) Calling .GetSSHHostname
	I0409 00:29:23.505238   69188 main.go:141] libmachine: (kubernetes-upgrade-636554) DBG | domain kubernetes-upgrade-636554 has defined MAC address 52:54:00:42:e7:ae in network mk-kubernetes-upgrade-636554
	I0409 00:29:23.505440   69188 main.go:141] libmachine: (kubernetes-upgrade-636554) DBG | domain kubernetes-upgrade-636554 has defined MAC address 52:54:00:42:e7:ae in network mk-kubernetes-upgrade-636554
	I0409 00:29:23.505645   69188 main.go:141] libmachine: (kubernetes-upgrade-636554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:e7:ae", ip: ""} in network mk-kubernetes-upgrade-636554: {Iface:virbr2 ExpiryTime:2025-04-09 01:28:39 +0000 UTC Type:0 Mac:52:54:00:42:e7:ae Iaid: IPaddr:192.168.50.37 Prefix:24 Hostname:kubernetes-upgrade-636554 Clientid:01:52:54:00:42:e7:ae}
	I0409 00:29:23.505664   69188 main.go:141] libmachine: (kubernetes-upgrade-636554) DBG | domain kubernetes-upgrade-636554 has defined IP address 192.168.50.37 and MAC address 52:54:00:42:e7:ae in network mk-kubernetes-upgrade-636554
	I0409 00:29:23.505829   69188 main.go:141] libmachine: (kubernetes-upgrade-636554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:e7:ae", ip: ""} in network mk-kubernetes-upgrade-636554: {Iface:virbr2 ExpiryTime:2025-04-09 01:28:39 +0000 UTC Type:0 Mac:52:54:00:42:e7:ae Iaid: IPaddr:192.168.50.37 Prefix:24 Hostname:kubernetes-upgrade-636554 Clientid:01:52:54:00:42:e7:ae}
	I0409 00:29:23.505874   69188 main.go:141] libmachine: (kubernetes-upgrade-636554) DBG | domain kubernetes-upgrade-636554 has defined IP address 192.168.50.37 and MAC address 52:54:00:42:e7:ae in network mk-kubernetes-upgrade-636554
	I0409 00:29:23.505880   69188 main.go:141] libmachine: (kubernetes-upgrade-636554) Calling .GetSSHPort
	I0409 00:29:23.506126   69188 main.go:141] libmachine: (kubernetes-upgrade-636554) Calling .GetSSHPort
	I0409 00:29:23.506133   69188 main.go:141] libmachine: (kubernetes-upgrade-636554) Calling .GetSSHKeyPath
	I0409 00:29:23.506326   69188 main.go:141] libmachine: (kubernetes-upgrade-636554) Calling .GetSSHKeyPath
	I0409 00:29:23.506331   69188 main.go:141] libmachine: (kubernetes-upgrade-636554) Calling .GetSSHUsername
	I0409 00:29:23.506460   69188 main.go:141] libmachine: (kubernetes-upgrade-636554) Calling .GetSSHUsername
	I0409 00:29:23.506525   69188 sshutil.go:53] new ssh client: &{IP:192.168.50.37 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20501-9125/.minikube/machines/kubernetes-upgrade-636554/id_rsa Username:docker}
	I0409 00:29:23.506588   69188 sshutil.go:53] new ssh client: &{IP:192.168.50.37 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20501-9125/.minikube/machines/kubernetes-upgrade-636554/id_rsa Username:docker}
	I0409 00:29:23.669493   69188 ssh_runner.go:195] Run: systemctl --version
	I0409 00:29:23.767665   69188 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0409 00:29:24.121787   69188 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0409 00:29:24.127633   69188 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0409 00:29:24.127712   69188 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0409 00:29:24.145237   69188 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0409 00:29:24.145263   69188 start.go:495] detecting cgroup driver to use...
	I0409 00:29:24.145322   69188 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0409 00:29:24.163123   69188 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0409 00:29:24.193146   69188 docker.go:217] disabling cri-docker service (if available) ...
	I0409 00:29:24.193212   69188 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0409 00:29:24.225787   69188 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0409 00:29:24.242183   69188 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0409 00:29:24.459429   69188 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0409 00:29:24.663756   69188 docker.go:233] disabling docker service ...
	I0409 00:29:24.663833   69188 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0409 00:29:24.691375   69188 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0409 00:29:24.705811   69188 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0409 00:29:24.867467   69188 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0409 00:29:25.031323   69188 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0409 00:29:25.046287   69188 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0409 00:29:25.065891   69188 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0409 00:29:25.065953   69188 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0409 00:29:25.077607   69188 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0409 00:29:25.077660   69188 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0409 00:29:25.089512   69188 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0409 00:29:25.101265   69188 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0409 00:29:25.111609   69188 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0409 00:29:25.123926   69188 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0409 00:29:25.134944   69188 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0409 00:29:25.145144   69188 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0409 00:29:25.155801   69188 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0409 00:29:25.165370   69188 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0409 00:29:25.175298   69188 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0409 00:29:25.348791   69188 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0409 00:29:23.499400   70164 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0409 00:29:23.499588   70164 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0409 00:29:23.499656   70164 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0409 00:29:23.516412   70164 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37519
	I0409 00:29:23.516757   70164 main.go:141] libmachine: () Calling .GetVersion
	I0409 00:29:23.517254   70164 main.go:141] libmachine: Using API Version  1
	I0409 00:29:23.517303   70164 main.go:141] libmachine: () Calling .SetConfigRaw
	I0409 00:29:23.517657   70164 main.go:141] libmachine: () Calling .GetMachineName
	I0409 00:29:23.517839   70164 main.go:141] libmachine: (bridge-459514) Calling .GetMachineName
	I0409 00:29:23.517996   70164 main.go:141] libmachine: (bridge-459514) Calling .DriverName
	I0409 00:29:23.518158   70164 start.go:159] libmachine.API.Create for "bridge-459514" (driver="kvm2")
	I0409 00:29:23.518189   70164 client.go:168] LocalClient.Create starting
	I0409 00:29:23.518220   70164 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20501-9125/.minikube/certs/ca.pem
	I0409 00:29:23.518272   70164 main.go:141] libmachine: Decoding PEM data...
	I0409 00:29:23.518291   70164 main.go:141] libmachine: Parsing certificate...
	I0409 00:29:23.518360   70164 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20501-9125/.minikube/certs/cert.pem
	I0409 00:29:23.518393   70164 main.go:141] libmachine: Decoding PEM data...
	I0409 00:29:23.518411   70164 main.go:141] libmachine: Parsing certificate...
	I0409 00:29:23.518431   70164 main.go:141] libmachine: Running pre-create checks...
	I0409 00:29:23.518450   70164 main.go:141] libmachine: (bridge-459514) Calling .PreCreateCheck
	I0409 00:29:23.518821   70164 main.go:141] libmachine: (bridge-459514) Calling .GetConfigRaw
	I0409 00:29:23.519249   70164 main.go:141] libmachine: Creating machine...
	I0409 00:29:23.519265   70164 main.go:141] libmachine: (bridge-459514) Calling .Create
	I0409 00:29:23.519405   70164 main.go:141] libmachine: (bridge-459514) creating KVM machine...
	I0409 00:29:23.519425   70164 main.go:141] libmachine: (bridge-459514) creating network...
	I0409 00:29:23.520719   70164 main.go:141] libmachine: (bridge-459514) DBG | found existing default KVM network
	I0409 00:29:23.521991   70164 main.go:141] libmachine: (bridge-459514) DBG | I0409 00:29:23.521833   70295 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000013c90}
	I0409 00:29:23.522012   70164 main.go:141] libmachine: (bridge-459514) DBG | created network xml: 
	I0409 00:29:23.522031   70164 main.go:141] libmachine: (bridge-459514) DBG | <network>
	I0409 00:29:23.522042   70164 main.go:141] libmachine: (bridge-459514) DBG |   <name>mk-bridge-459514</name>
	I0409 00:29:23.522052   70164 main.go:141] libmachine: (bridge-459514) DBG |   <dns enable='no'/>
	I0409 00:29:23.522059   70164 main.go:141] libmachine: (bridge-459514) DBG |   
	I0409 00:29:23.522070   70164 main.go:141] libmachine: (bridge-459514) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0409 00:29:23.522080   70164 main.go:141] libmachine: (bridge-459514) DBG |     <dhcp>
	I0409 00:29:23.522091   70164 main.go:141] libmachine: (bridge-459514) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0409 00:29:23.522119   70164 main.go:141] libmachine: (bridge-459514) DBG |     </dhcp>
	I0409 00:29:23.522130   70164 main.go:141] libmachine: (bridge-459514) DBG |   </ip>
	I0409 00:29:23.522146   70164 main.go:141] libmachine: (bridge-459514) DBG |   
	I0409 00:29:23.522155   70164 main.go:141] libmachine: (bridge-459514) DBG | </network>
	I0409 00:29:23.522165   70164 main.go:141] libmachine: (bridge-459514) DBG | 
	I0409 00:29:23.526834   70164 main.go:141] libmachine: (bridge-459514) DBG | trying to create private KVM network mk-bridge-459514 192.168.39.0/24...
	I0409 00:29:23.606951   70164 main.go:141] libmachine: (bridge-459514) DBG | private KVM network mk-bridge-459514 192.168.39.0/24 created
	I0409 00:29:23.606994   70164 main.go:141] libmachine: (bridge-459514) DBG | I0409 00:29:23.606954   70295 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20501-9125/.minikube
	I0409 00:29:23.607012   70164 main.go:141] libmachine: (bridge-459514) setting up store path in /home/jenkins/minikube-integration/20501-9125/.minikube/machines/bridge-459514 ...
	I0409 00:29:23.607025   70164 main.go:141] libmachine: (bridge-459514) building disk image from file:///home/jenkins/minikube-integration/20501-9125/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0409 00:29:23.607195   70164 main.go:141] libmachine: (bridge-459514) Downloading /home/jenkins/minikube-integration/20501-9125/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20501-9125/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0409 00:29:23.865611   70164 main.go:141] libmachine: (bridge-459514) DBG | I0409 00:29:23.865463   70295 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20501-9125/.minikube/machines/bridge-459514/id_rsa...
	I0409 00:29:24.095203   70164 main.go:141] libmachine: (bridge-459514) DBG | I0409 00:29:24.095069   70295 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20501-9125/.minikube/machines/bridge-459514/bridge-459514.rawdisk...
	I0409 00:29:24.095228   70164 main.go:141] libmachine: (bridge-459514) DBG | Writing magic tar header
	I0409 00:29:24.095237   70164 main.go:141] libmachine: (bridge-459514) DBG | Writing SSH key tar header
	I0409 00:29:24.095299   70164 main.go:141] libmachine: (bridge-459514) DBG | I0409 00:29:24.095241   70295 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20501-9125/.minikube/machines/bridge-459514 ...
	I0409 00:29:24.095398   70164 main.go:141] libmachine: (bridge-459514) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20501-9125/.minikube/machines/bridge-459514
	I0409 00:29:24.095423   70164 main.go:141] libmachine: (bridge-459514) setting executable bit set on /home/jenkins/minikube-integration/20501-9125/.minikube/machines/bridge-459514 (perms=drwx------)
	I0409 00:29:24.095439   70164 main.go:141] libmachine: (bridge-459514) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20501-9125/.minikube/machines
	I0409 00:29:24.095455   70164 main.go:141] libmachine: (bridge-459514) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20501-9125/.minikube
	I0409 00:29:24.095468   70164 main.go:141] libmachine: (bridge-459514) setting executable bit set on /home/jenkins/minikube-integration/20501-9125/.minikube/machines (perms=drwxr-xr-x)
	I0409 00:29:24.095483   70164 main.go:141] libmachine: (bridge-459514) setting executable bit set on /home/jenkins/minikube-integration/20501-9125/.minikube (perms=drwxr-xr-x)
	I0409 00:29:24.095496   70164 main.go:141] libmachine: (bridge-459514) setting executable bit set on /home/jenkins/minikube-integration/20501-9125 (perms=drwxrwxr-x)
	I0409 00:29:24.095509   70164 main.go:141] libmachine: (bridge-459514) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0409 00:29:24.095522   70164 main.go:141] libmachine: (bridge-459514) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0409 00:29:24.095533   70164 main.go:141] libmachine: (bridge-459514) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20501-9125
	I0409 00:29:24.095545   70164 main.go:141] libmachine: (bridge-459514) creating domain...
	I0409 00:29:24.095559   70164 main.go:141] libmachine: (bridge-459514) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0409 00:29:24.095582   70164 main.go:141] libmachine: (bridge-459514) DBG | checking permissions on dir: /home/jenkins
	I0409 00:29:24.095601   70164 main.go:141] libmachine: (bridge-459514) DBG | checking permissions on dir: /home
	I0409 00:29:24.095624   70164 main.go:141] libmachine: (bridge-459514) DBG | skipping /home - not owner
	I0409 00:29:24.097033   70164 main.go:141] libmachine: (bridge-459514) define libvirt domain using xml: 
	I0409 00:29:24.097056   70164 main.go:141] libmachine: (bridge-459514) <domain type='kvm'>
	I0409 00:29:24.097065   70164 main.go:141] libmachine: (bridge-459514)   <name>bridge-459514</name>
	I0409 00:29:24.097073   70164 main.go:141] libmachine: (bridge-459514)   <memory unit='MiB'>3072</memory>
	I0409 00:29:24.097079   70164 main.go:141] libmachine: (bridge-459514)   <vcpu>2</vcpu>
	I0409 00:29:24.097098   70164 main.go:141] libmachine: (bridge-459514)   <features>
	I0409 00:29:24.097109   70164 main.go:141] libmachine: (bridge-459514)     <acpi/>
	I0409 00:29:24.097115   70164 main.go:141] libmachine: (bridge-459514)     <apic/>
	I0409 00:29:24.097128   70164 main.go:141] libmachine: (bridge-459514)     <pae/>
	I0409 00:29:24.097134   70164 main.go:141] libmachine: (bridge-459514)     
	I0409 00:29:24.097142   70164 main.go:141] libmachine: (bridge-459514)   </features>
	I0409 00:29:24.097149   70164 main.go:141] libmachine: (bridge-459514)   <cpu mode='host-passthrough'>
	I0409 00:29:24.097187   70164 main.go:141] libmachine: (bridge-459514)   
	I0409 00:29:24.097213   70164 main.go:141] libmachine: (bridge-459514)   </cpu>
	I0409 00:29:24.097227   70164 main.go:141] libmachine: (bridge-459514)   <os>
	I0409 00:29:24.097237   70164 main.go:141] libmachine: (bridge-459514)     <type>hvm</type>
	I0409 00:29:24.097258   70164 main.go:141] libmachine: (bridge-459514)     <boot dev='cdrom'/>
	I0409 00:29:24.097269   70164 main.go:141] libmachine: (bridge-459514)     <boot dev='hd'/>
	I0409 00:29:24.097278   70164 main.go:141] libmachine: (bridge-459514)     <bootmenu enable='no'/>
	I0409 00:29:24.097295   70164 main.go:141] libmachine: (bridge-459514)   </os>
	I0409 00:29:24.097305   70164 main.go:141] libmachine: (bridge-459514)   <devices>
	I0409 00:29:24.097313   70164 main.go:141] libmachine: (bridge-459514)     <disk type='file' device='cdrom'>
	I0409 00:29:24.097329   70164 main.go:141] libmachine: (bridge-459514)       <source file='/home/jenkins/minikube-integration/20501-9125/.minikube/machines/bridge-459514/boot2docker.iso'/>
	I0409 00:29:24.097340   70164 main.go:141] libmachine: (bridge-459514)       <target dev='hdc' bus='scsi'/>
	I0409 00:29:24.097348   70164 main.go:141] libmachine: (bridge-459514)       <readonly/>
	I0409 00:29:24.097371   70164 main.go:141] libmachine: (bridge-459514)     </disk>
	I0409 00:29:24.097388   70164 main.go:141] libmachine: (bridge-459514)     <disk type='file' device='disk'>
	I0409 00:29:24.097434   70164 main.go:141] libmachine: (bridge-459514)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0409 00:29:24.097459   70164 main.go:141] libmachine: (bridge-459514)       <source file='/home/jenkins/minikube-integration/20501-9125/.minikube/machines/bridge-459514/bridge-459514.rawdisk'/>
	I0409 00:29:24.097471   70164 main.go:141] libmachine: (bridge-459514)       <target dev='hda' bus='virtio'/>
	I0409 00:29:24.097480   70164 main.go:141] libmachine: (bridge-459514)     </disk>
	I0409 00:29:24.097501   70164 main.go:141] libmachine: (bridge-459514)     <interface type='network'>
	I0409 00:29:24.097513   70164 main.go:141] libmachine: (bridge-459514)       <source network='mk-bridge-459514'/>
	I0409 00:29:24.097527   70164 main.go:141] libmachine: (bridge-459514)       <model type='virtio'/>
	I0409 00:29:24.097534   70164 main.go:141] libmachine: (bridge-459514)     </interface>
	I0409 00:29:24.097544   70164 main.go:141] libmachine: (bridge-459514)     <interface type='network'>
	I0409 00:29:24.097552   70164 main.go:141] libmachine: (bridge-459514)       <source network='default'/>
	I0409 00:29:24.097561   70164 main.go:141] libmachine: (bridge-459514)       <model type='virtio'/>
	I0409 00:29:24.097571   70164 main.go:141] libmachine: (bridge-459514)     </interface>
	I0409 00:29:24.097581   70164 main.go:141] libmachine: (bridge-459514)     <serial type='pty'>
	I0409 00:29:24.097646   70164 main.go:141] libmachine: (bridge-459514)       <target port='0'/>
	I0409 00:29:24.097661   70164 main.go:141] libmachine: (bridge-459514)     </serial>
	I0409 00:29:24.097668   70164 main.go:141] libmachine: (bridge-459514)     <console type='pty'>
	I0409 00:29:24.097695   70164 main.go:141] libmachine: (bridge-459514)       <target type='serial' port='0'/>
	I0409 00:29:24.097711   70164 main.go:141] libmachine: (bridge-459514)     </console>
	I0409 00:29:24.097720   70164 main.go:141] libmachine: (bridge-459514)     <rng model='virtio'>
	I0409 00:29:24.097729   70164 main.go:141] libmachine: (bridge-459514)       <backend model='random'>/dev/random</backend>
	I0409 00:29:24.097740   70164 main.go:141] libmachine: (bridge-459514)     </rng>
	I0409 00:29:24.097758   70164 main.go:141] libmachine: (bridge-459514)     
	I0409 00:29:24.097774   70164 main.go:141] libmachine: (bridge-459514)     
	I0409 00:29:24.097791   70164 main.go:141] libmachine: (bridge-459514)   </devices>
	I0409 00:29:24.097808   70164 main.go:141] libmachine: (bridge-459514) </domain>
	I0409 00:29:24.097824   70164 main.go:141] libmachine: (bridge-459514) 
	I0409 00:29:24.102178   70164 main.go:141] libmachine: (bridge-459514) DBG | domain bridge-459514 has defined MAC address 52:54:00:4e:b5:f7 in network default
	I0409 00:29:24.102965   70164 main.go:141] libmachine: (bridge-459514) DBG | domain bridge-459514 has defined MAC address 52:54:00:69:11:f3 in network mk-bridge-459514
	I0409 00:29:24.102980   70164 main.go:141] libmachine: (bridge-459514) starting domain...
	I0409 00:29:24.102991   70164 main.go:141] libmachine: (bridge-459514) ensuring networks are active...
	I0409 00:29:24.103814   70164 main.go:141] libmachine: (bridge-459514) Ensuring network default is active
	I0409 00:29:24.104242   70164 main.go:141] libmachine: (bridge-459514) Ensuring network mk-bridge-459514 is active
	I0409 00:29:24.104751   70164 main.go:141] libmachine: (bridge-459514) getting domain XML...
	I0409 00:29:24.105568   70164 main.go:141] libmachine: (bridge-459514) creating domain...
	I0409 00:29:25.407545   70164 main.go:141] libmachine: (bridge-459514) waiting for IP...
	I0409 00:29:25.408294   70164 main.go:141] libmachine: (bridge-459514) DBG | domain bridge-459514 has defined MAC address 52:54:00:69:11:f3 in network mk-bridge-459514
	I0409 00:29:25.408730   70164 main.go:141] libmachine: (bridge-459514) DBG | unable to find current IP address of domain bridge-459514 in network mk-bridge-459514
	I0409 00:29:25.408796   70164 main.go:141] libmachine: (bridge-459514) DBG | I0409 00:29:25.408727   70295 retry.go:31] will retry after 282.808026ms: waiting for domain to come up
	I0409 00:29:25.693250   70164 main.go:141] libmachine: (bridge-459514) DBG | domain bridge-459514 has defined MAC address 52:54:00:69:11:f3 in network mk-bridge-459514
	I0409 00:29:25.693826   70164 main.go:141] libmachine: (bridge-459514) DBG | unable to find current IP address of domain bridge-459514 in network mk-bridge-459514
	I0409 00:29:25.693850   70164 main.go:141] libmachine: (bridge-459514) DBG | I0409 00:29:25.693795   70295 retry.go:31] will retry after 315.46844ms: waiting for domain to come up
	I0409 00:29:26.011315   70164 main.go:141] libmachine: (bridge-459514) DBG | domain bridge-459514 has defined MAC address 52:54:00:69:11:f3 in network mk-bridge-459514
	I0409 00:29:26.011888   70164 main.go:141] libmachine: (bridge-459514) DBG | unable to find current IP address of domain bridge-459514 in network mk-bridge-459514
	I0409 00:29:26.011922   70164 main.go:141] libmachine: (bridge-459514) DBG | I0409 00:29:26.011835   70295 retry.go:31] will retry after 323.073246ms: waiting for domain to come up
	I0409 00:29:26.336568   70164 main.go:141] libmachine: (bridge-459514) DBG | domain bridge-459514 has defined MAC address 52:54:00:69:11:f3 in network mk-bridge-459514
	I0409 00:29:26.337271   70164 main.go:141] libmachine: (bridge-459514) DBG | unable to find current IP address of domain bridge-459514 in network mk-bridge-459514
	I0409 00:29:26.337296   70164 main.go:141] libmachine: (bridge-459514) DBG | I0409 00:29:26.337250   70295 retry.go:31] will retry after 570.187835ms: waiting for domain to come up
	I0409 00:29:26.909145   70164 main.go:141] libmachine: (bridge-459514) DBG | domain bridge-459514 has defined MAC address 52:54:00:69:11:f3 in network mk-bridge-459514
	I0409 00:29:26.909608   70164 main.go:141] libmachine: (bridge-459514) DBG | unable to find current IP address of domain bridge-459514 in network mk-bridge-459514
	I0409 00:29:26.909635   70164 main.go:141] libmachine: (bridge-459514) DBG | I0409 00:29:26.909593   70295 retry.go:31] will retry after 522.186357ms: waiting for domain to come up
	I0409 00:29:27.433229   70164 main.go:141] libmachine: (bridge-459514) DBG | domain bridge-459514 has defined MAC address 52:54:00:69:11:f3 in network mk-bridge-459514
	I0409 00:29:27.433863   70164 main.go:141] libmachine: (bridge-459514) DBG | unable to find current IP address of domain bridge-459514 in network mk-bridge-459514
	I0409 00:29:27.433904   70164 main.go:141] libmachine: (bridge-459514) DBG | I0409 00:29:27.433842   70295 retry.go:31] will retry after 923.155758ms: waiting for domain to come up
	I0409 00:29:27.421013   66178 pod_ready.go:93] pod "coredns-668d6bf9bc-xgt7n" in "kube-system" namespace has status "Ready":"True"
	I0409 00:29:27.421041   66178 pod_ready.go:82] duration metric: took 34.506217884s for pod "coredns-668d6bf9bc-xgt7n" in "kube-system" namespace to be "Ready" ...
	I0409 00:29:27.421054   66178 pod_ready.go:79] waiting up to 15m0s for pod "etcd-enable-default-cni-459514" in "kube-system" namespace to be "Ready" ...
	I0409 00:29:27.425225   66178 pod_ready.go:93] pod "etcd-enable-default-cni-459514" in "kube-system" namespace has status "Ready":"True"
	I0409 00:29:27.425252   66178 pod_ready.go:82] duration metric: took 4.19046ms for pod "etcd-enable-default-cni-459514" in "kube-system" namespace to be "Ready" ...
	I0409 00:29:27.425265   66178 pod_ready.go:79] waiting up to 15m0s for pod "kube-apiserver-enable-default-cni-459514" in "kube-system" namespace to be "Ready" ...
	I0409 00:29:27.430276   66178 pod_ready.go:93] pod "kube-apiserver-enable-default-cni-459514" in "kube-system" namespace has status "Ready":"True"
	I0409 00:29:27.430308   66178 pod_ready.go:82] duration metric: took 5.033574ms for pod "kube-apiserver-enable-default-cni-459514" in "kube-system" namespace to be "Ready" ...
	I0409 00:29:27.430324   66178 pod_ready.go:79] waiting up to 15m0s for pod "kube-controller-manager-enable-default-cni-459514" in "kube-system" namespace to be "Ready" ...
	I0409 00:29:27.434914   66178 pod_ready.go:93] pod "kube-controller-manager-enable-default-cni-459514" in "kube-system" namespace has status "Ready":"True"
	I0409 00:29:27.434934   66178 pod_ready.go:82] duration metric: took 4.600685ms for pod "kube-controller-manager-enable-default-cni-459514" in "kube-system" namespace to be "Ready" ...
	I0409 00:29:27.434960   66178 pod_ready.go:79] waiting up to 15m0s for pod "kube-proxy-nn22p" in "kube-system" namespace to be "Ready" ...
	I0409 00:29:27.439013   66178 pod_ready.go:93] pod "kube-proxy-nn22p" in "kube-system" namespace has status "Ready":"True"
	I0409 00:29:27.439032   66178 pod_ready.go:82] duration metric: took 4.063565ms for pod "kube-proxy-nn22p" in "kube-system" namespace to be "Ready" ...
	I0409 00:29:27.439042   66178 pod_ready.go:79] waiting up to 15m0s for pod "kube-scheduler-enable-default-cni-459514" in "kube-system" namespace to be "Ready" ...
	I0409 00:29:27.819597   66178 pod_ready.go:93] pod "kube-scheduler-enable-default-cni-459514" in "kube-system" namespace has status "Ready":"True"
	I0409 00:29:27.819630   66178 pod_ready.go:82] duration metric: took 380.57979ms for pod "kube-scheduler-enable-default-cni-459514" in "kube-system" namespace to be "Ready" ...
	I0409 00:29:27.819646   66178 pod_ready.go:39] duration metric: took 36.607118635s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0409 00:29:27.819668   66178 api_server.go:52] waiting for apiserver process to appear ...
	I0409 00:29:27.819732   66178 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0409 00:29:27.837871   66178 api_server.go:72] duration metric: took 38.418985198s to wait for apiserver process to appear ...
	I0409 00:29:27.837901   66178 api_server.go:88] waiting for apiserver healthz status ...
	I0409 00:29:27.837923   66178 api_server.go:253] Checking apiserver healthz at https://192.168.61.244:8443/healthz ...
	I0409 00:29:27.844440   66178 api_server.go:279] https://192.168.61.244:8443/healthz returned 200:
	ok
	I0409 00:29:27.845449   66178 api_server.go:141] control plane version: v1.32.2
	I0409 00:29:27.845475   66178 api_server.go:131] duration metric: took 7.565208ms to wait for apiserver health ...
	I0409 00:29:27.845486   66178 system_pods.go:43] waiting for kube-system pods to appear ...
	I0409 00:29:28.020704   66178 system_pods.go:59] 7 kube-system pods found
	I0409 00:29:28.020738   66178 system_pods.go:61] "coredns-668d6bf9bc-xgt7n" [2596682e-ffbc-4240-bd1d-5ef949d88fb3] Running
	I0409 00:29:28.020746   66178 system_pods.go:61] "etcd-enable-default-cni-459514" [cfb8fcfd-144b-4915-99e9-eed5c229fe1c] Running
	I0409 00:29:28.020753   66178 system_pods.go:61] "kube-apiserver-enable-default-cni-459514" [e6644538-8928-4fc2-9fbd-20ae8a8db939] Running
	I0409 00:29:28.020759   66178 system_pods.go:61] "kube-controller-manager-enable-default-cni-459514" [feb9f24c-266b-403f-811a-5a54a7b6b176] Running
	I0409 00:29:28.020764   66178 system_pods.go:61] "kube-proxy-nn22p" [8ff006e4-cb96-4170-91d2-bb19dfa99ba1] Running
	I0409 00:29:28.020770   66178 system_pods.go:61] "kube-scheduler-enable-default-cni-459514" [11bee6e5-2a50-48c4-a549-ebdacc1f77dd] Running
	I0409 00:29:28.020775   66178 system_pods.go:61] "storage-provisioner" [a56be8b3-5642-4b74-83c5-acfd706b313c] Running
	I0409 00:29:28.020784   66178 system_pods.go:74] duration metric: took 175.290708ms to wait for pod list to return data ...
	I0409 00:29:28.020795   66178 default_sa.go:34] waiting for default service account to be created ...
	I0409 00:29:28.219549   66178 default_sa.go:45] found service account: "default"
	I0409 00:29:28.219581   66178 default_sa.go:55] duration metric: took 198.776649ms for default service account to be created ...
	I0409 00:29:28.219593   66178 system_pods.go:116] waiting for k8s-apps to be running ...
	I0409 00:29:28.420638   66178 system_pods.go:86] 7 kube-system pods found
	I0409 00:29:28.420665   66178 system_pods.go:89] "coredns-668d6bf9bc-xgt7n" [2596682e-ffbc-4240-bd1d-5ef949d88fb3] Running
	I0409 00:29:28.420672   66178 system_pods.go:89] "etcd-enable-default-cni-459514" [cfb8fcfd-144b-4915-99e9-eed5c229fe1c] Running
	I0409 00:29:28.420676   66178 system_pods.go:89] "kube-apiserver-enable-default-cni-459514" [e6644538-8928-4fc2-9fbd-20ae8a8db939] Running
	I0409 00:29:28.420680   66178 system_pods.go:89] "kube-controller-manager-enable-default-cni-459514" [feb9f24c-266b-403f-811a-5a54a7b6b176] Running
	I0409 00:29:28.420683   66178 system_pods.go:89] "kube-proxy-nn22p" [8ff006e4-cb96-4170-91d2-bb19dfa99ba1] Running
	I0409 00:29:28.420687   66178 system_pods.go:89] "kube-scheduler-enable-default-cni-459514" [11bee6e5-2a50-48c4-a549-ebdacc1f77dd] Running
	I0409 00:29:28.420691   66178 system_pods.go:89] "storage-provisioner" [a56be8b3-5642-4b74-83c5-acfd706b313c] Running
	I0409 00:29:28.420697   66178 system_pods.go:126] duration metric: took 201.098085ms to wait for k8s-apps to be running ...
	I0409 00:29:28.420703   66178 system_svc.go:44] waiting for kubelet service to be running ....
	I0409 00:29:28.420743   66178 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0409 00:29:28.438204   66178 system_svc.go:56] duration metric: took 17.49033ms WaitForService to wait for kubelet
	I0409 00:29:28.438240   66178 kubeadm.go:582] duration metric: took 39.01935975s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0409 00:29:28.438261   66178 node_conditions.go:102] verifying NodePressure condition ...
	I0409 00:29:28.620032   66178 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0409 00:29:28.620072   66178 node_conditions.go:123] node cpu capacity is 2
	I0409 00:29:28.620088   66178 node_conditions.go:105] duration metric: took 181.821603ms to run NodePressure ...
	I0409 00:29:28.620102   66178 start.go:241] waiting for startup goroutines ...
	I0409 00:29:28.620111   66178 start.go:246] waiting for cluster config update ...
	I0409 00:29:28.620126   66178 start.go:255] writing updated cluster config ...
	I0409 00:29:28.620413   66178 ssh_runner.go:195] Run: rm -f paused
	I0409 00:29:28.668100   66178 start.go:600] kubectl: 1.32.3, cluster: 1.32.2 (minor skew: 0)
	I0409 00:29:28.670656   66178 out.go:177] * Done! kubectl is now configured to use "enable-default-cni-459514" cluster and "default" namespace by default
	I0409 00:29:31.596034   68181 kubeadm.go:310] [init] Using Kubernetes version: v1.32.2
	I0409 00:29:31.596114   68181 kubeadm.go:310] [preflight] Running pre-flight checks
	I0409 00:29:31.596239   68181 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0409 00:29:31.596397   68181 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0409 00:29:31.596531   68181 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0409 00:29:31.596605   68181 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0409 00:29:31.598010   68181 out.go:235]   - Generating certificates and keys ...
	I0409 00:29:31.598115   68181 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0409 00:29:31.598223   68181 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0409 00:29:31.598331   68181 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0409 00:29:31.598408   68181 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0409 00:29:31.598494   68181 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0409 00:29:31.598574   68181 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0409 00:29:31.598671   68181 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0409 00:29:31.598871   68181 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [flannel-459514 localhost] and IPs [192.168.72.137 127.0.0.1 ::1]
	I0409 00:29:31.598979   68181 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0409 00:29:31.599126   68181 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [flannel-459514 localhost] and IPs [192.168.72.137 127.0.0.1 ::1]
	I0409 00:29:31.599230   68181 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0409 00:29:31.599320   68181 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0409 00:29:31.599391   68181 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0409 00:29:31.599470   68181 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0409 00:29:31.599549   68181 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0409 00:29:31.599641   68181 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0409 00:29:31.599716   68181 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0409 00:29:31.599811   68181 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0409 00:29:31.599965   68181 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0409 00:29:31.600082   68181 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0409 00:29:31.600166   68181 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0409 00:29:31.602276   68181 out.go:235]   - Booting up control plane ...
	I0409 00:29:31.602408   68181 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0409 00:29:31.602527   68181 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0409 00:29:31.602626   68181 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0409 00:29:31.602774   68181 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0409 00:29:31.602947   68181 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0409 00:29:31.602999   68181 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0409 00:29:31.603170   68181 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0409 00:29:31.603308   68181 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0409 00:29:31.603381   68181 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001779039s
	I0409 00:29:31.603490   68181 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0409 00:29:31.603581   68181 kubeadm.go:310] [api-check] The API server is healthy after 5.002162668s
	I0409 00:29:31.603740   68181 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0409 00:29:31.603933   68181 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0409 00:29:31.604016   68181 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0409 00:29:31.604260   68181 kubeadm.go:310] [mark-control-plane] Marking the node flannel-459514 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0409 00:29:31.604342   68181 kubeadm.go:310] [bootstrap-token] Using token: ci6234.texxgeil0p1wfp61
	I0409 00:29:31.605485   68181 out.go:235]   - Configuring RBAC rules ...
	I0409 00:29:31.605641   68181 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0409 00:29:31.605752   68181 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0409 00:29:31.605957   68181 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0409 00:29:31.606138   68181 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0409 00:29:31.606307   68181 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0409 00:29:31.606430   68181 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0409 00:29:31.606586   68181 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0409 00:29:31.606668   68181 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0409 00:29:31.606736   68181 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0409 00:29:31.606746   68181 kubeadm.go:310] 
	I0409 00:29:31.606827   68181 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0409 00:29:31.606836   68181 kubeadm.go:310] 
	I0409 00:29:31.606962   68181 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0409 00:29:31.606970   68181 kubeadm.go:310] 
	I0409 00:29:31.607005   68181 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0409 00:29:31.607089   68181 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0409 00:29:31.607166   68181 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0409 00:29:31.607180   68181 kubeadm.go:310] 
	I0409 00:29:31.607255   68181 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0409 00:29:31.607263   68181 kubeadm.go:310] 
	I0409 00:29:31.607330   68181 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0409 00:29:31.607339   68181 kubeadm.go:310] 
	I0409 00:29:31.607415   68181 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0409 00:29:31.607498   68181 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0409 00:29:31.607582   68181 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0409 00:29:31.607589   68181 kubeadm.go:310] 
	I0409 00:29:31.607712   68181 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0409 00:29:31.607826   68181 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0409 00:29:31.607833   68181 kubeadm.go:310] 
	I0409 00:29:31.607960   68181 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token ci6234.texxgeil0p1wfp61 \
	I0409 00:29:31.608114   68181 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b5297d9bc4c0ea922a06282e0375039318a097df0c51ae921cb5fce714787b8b \
	I0409 00:29:31.608146   68181 kubeadm.go:310] 	--control-plane 
	I0409 00:29:31.608162   68181 kubeadm.go:310] 
	I0409 00:29:31.608292   68181 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0409 00:29:31.608302   68181 kubeadm.go:310] 
	I0409 00:29:31.608428   68181 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token ci6234.texxgeil0p1wfp61 \
	I0409 00:29:31.608593   68181 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b5297d9bc4c0ea922a06282e0375039318a097df0c51ae921cb5fce714787b8b 
	I0409 00:29:31.608606   68181 cni.go:84] Creating CNI manager for "flannel"
	I0409 00:29:31.610032   68181 out.go:177] * Configuring Flannel (Container Networking Interface) ...
	I0409 00:29:28.358393   70164 main.go:141] libmachine: (bridge-459514) DBG | domain bridge-459514 has defined MAC address 52:54:00:69:11:f3 in network mk-bridge-459514
	I0409 00:29:28.358881   70164 main.go:141] libmachine: (bridge-459514) DBG | unable to find current IP address of domain bridge-459514 in network mk-bridge-459514
	I0409 00:29:28.358916   70164 main.go:141] libmachine: (bridge-459514) DBG | I0409 00:29:28.358838   70295 retry.go:31] will retry after 959.278801ms: waiting for domain to come up
	I0409 00:29:29.319520   70164 main.go:141] libmachine: (bridge-459514) DBG | domain bridge-459514 has defined MAC address 52:54:00:69:11:f3 in network mk-bridge-459514
	I0409 00:29:29.320118   70164 main.go:141] libmachine: (bridge-459514) DBG | unable to find current IP address of domain bridge-459514 in network mk-bridge-459514
	I0409 00:29:29.320146   70164 main.go:141] libmachine: (bridge-459514) DBG | I0409 00:29:29.320103   70295 retry.go:31] will retry after 1.125545389s: waiting for domain to come up
	I0409 00:29:30.447802   70164 main.go:141] libmachine: (bridge-459514) DBG | domain bridge-459514 has defined MAC address 52:54:00:69:11:f3 in network mk-bridge-459514
	I0409 00:29:30.448335   70164 main.go:141] libmachine: (bridge-459514) DBG | unable to find current IP address of domain bridge-459514 in network mk-bridge-459514
	I0409 00:29:30.448364   70164 main.go:141] libmachine: (bridge-459514) DBG | I0409 00:29:30.448306   70295 retry.go:31] will retry after 1.192583063s: waiting for domain to come up
	I0409 00:29:31.642041   70164 main.go:141] libmachine: (bridge-459514) DBG | domain bridge-459514 has defined MAC address 52:54:00:69:11:f3 in network mk-bridge-459514
	I0409 00:29:31.642622   70164 main.go:141] libmachine: (bridge-459514) DBG | unable to find current IP address of domain bridge-459514 in network mk-bridge-459514
	I0409 00:29:31.642652   70164 main.go:141] libmachine: (bridge-459514) DBG | I0409 00:29:31.642576   70295 retry.go:31] will retry after 1.563835618s: waiting for domain to come up
	I0409 00:29:34.992798   69188 ssh_runner.go:235] Completed: sudo systemctl restart crio: (9.643950037s)
	I0409 00:29:34.992833   69188 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0409 00:29:34.992888   69188 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0409 00:29:34.997825   69188 start.go:563] Will wait 60s for crictl version
	I0409 00:29:34.997889   69188 ssh_runner.go:195] Run: which crictl
	I0409 00:29:35.001944   69188 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0409 00:29:35.036625   69188 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0409 00:29:35.036729   69188 ssh_runner.go:195] Run: crio --version
	I0409 00:29:35.066982   69188 ssh_runner.go:195] Run: crio --version
	I0409 00:29:35.098225   69188 out.go:177] * Preparing Kubernetes v1.32.2 on CRI-O 1.29.1 ...
	I0409 00:29:31.611221   68181 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0409 00:29:31.617409   68181 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.32.2/kubectl ...
	I0409 00:29:31.617425   68181 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (4348 bytes)
	I0409 00:29:31.639623   68181 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0409 00:29:32.049250   68181 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0409 00:29:32.049319   68181 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0409 00:29:32.049381   68181 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes flannel-459514 minikube.k8s.io/updated_at=2025_04_09T00_29_32_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=fd2f4c3eba2bd452b5997c855e28d0966165ba83 minikube.k8s.io/name=flannel-459514 minikube.k8s.io/primary=true
	I0409 00:29:32.229514   68181 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0409 00:29:32.229608   68181 ops.go:34] apiserver oom_adj: -16
	I0409 00:29:32.730246   68181 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0409 00:29:33.230135   68181 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0409 00:29:33.730044   68181 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0409 00:29:34.230473   68181 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0409 00:29:34.729762   68181 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0409 00:29:35.229597   68181 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0409 00:29:35.347729   68181 kubeadm.go:1113] duration metric: took 3.298472407s to wait for elevateKubeSystemPrivileges
	I0409 00:29:35.347761   68181 kubeadm.go:394] duration metric: took 14.094942548s to StartCluster
	I0409 00:29:35.347780   68181 settings.go:142] acquiring lock: {Name:mk362ccb6fac1c71fdd578f798171322d97c1c2b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0409 00:29:35.347855   68181 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20501-9125/kubeconfig
	I0409 00:29:35.349426   68181 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20501-9125/kubeconfig: {Name:mk92c92b166b121ee2ee28c1b362d82cfe16b47a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0409 00:29:35.349722   68181 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0409 00:29:35.349730   68181 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.72.137 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0409 00:29:35.349811   68181 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0409 00:29:35.349949   68181 config.go:182] Loaded profile config "flannel-459514": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0409 00:29:35.349960   68181 addons.go:69] Setting storage-provisioner=true in profile "flannel-459514"
	I0409 00:29:35.349981   68181 addons.go:238] Setting addon storage-provisioner=true in "flannel-459514"
	I0409 00:29:35.350008   68181 addons.go:69] Setting default-storageclass=true in profile "flannel-459514"
	I0409 00:29:35.350026   68181 host.go:66] Checking if "flannel-459514" exists ...
	I0409 00:29:35.350034   68181 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "flannel-459514"
	I0409 00:29:35.350461   68181 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0409 00:29:35.350508   68181 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0409 00:29:35.350539   68181 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0409 00:29:35.350584   68181 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0409 00:29:35.352457   68181 out.go:177] * Verifying Kubernetes components...
	I0409 00:29:35.355842   68181 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0409 00:29:35.369104   68181 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34747
	I0409 00:29:35.369738   68181 main.go:141] libmachine: () Calling .GetVersion
	I0409 00:29:35.370261   68181 main.go:141] libmachine: Using API Version  1
	I0409 00:29:35.370286   68181 main.go:141] libmachine: () Calling .SetConfigRaw
	I0409 00:29:35.370700   68181 main.go:141] libmachine: () Calling .GetMachineName
	I0409 00:29:35.370868   68181 main.go:141] libmachine: (flannel-459514) Calling .GetState
	I0409 00:29:35.370903   68181 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38113
	I0409 00:29:35.371798   68181 main.go:141] libmachine: () Calling .GetVersion
	I0409 00:29:35.372434   68181 main.go:141] libmachine: Using API Version  1
	I0409 00:29:35.372452   68181 main.go:141] libmachine: () Calling .SetConfigRaw
	I0409 00:29:35.372920   68181 main.go:141] libmachine: () Calling .GetMachineName
	I0409 00:29:35.374984   68181 addons.go:238] Setting addon default-storageclass=true in "flannel-459514"
	I0409 00:29:35.375029   68181 host.go:66] Checking if "flannel-459514" exists ...
	I0409 00:29:35.375420   68181 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0409 00:29:35.375461   68181 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0409 00:29:35.376071   68181 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0409 00:29:35.376112   68181 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0409 00:29:35.397200   68181 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39395
	I0409 00:29:35.397666   68181 main.go:141] libmachine: () Calling .GetVersion
	I0409 00:29:35.399331   68181 main.go:141] libmachine: Using API Version  1
	I0409 00:29:35.399354   68181 main.go:141] libmachine: () Calling .SetConfigRaw
	I0409 00:29:35.399728   68181 main.go:141] libmachine: () Calling .GetMachineName
	I0409 00:29:35.399880   68181 main.go:141] libmachine: (flannel-459514) Calling .GetState
	I0409 00:29:35.406206   68181 main.go:141] libmachine: (flannel-459514) Calling .DriverName
	I0409 00:29:35.407741   68181 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0409 00:29:35.410084   68181 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43269
	I0409 00:29:35.410704   68181 main.go:141] libmachine: () Calling .GetVersion
	I0409 00:29:35.411228   68181 main.go:141] libmachine: Using API Version  1
	I0409 00:29:35.411253   68181 main.go:141] libmachine: () Calling .SetConfigRaw
	I0409 00:29:35.411629   68181 main.go:141] libmachine: () Calling .GetMachineName
	I0409 00:29:35.412222   68181 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0409 00:29:35.412271   68181 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0409 00:29:35.417042   68181 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0409 00:29:35.417058   68181 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0409 00:29:35.417078   68181 main.go:141] libmachine: (flannel-459514) Calling .GetSSHHostname
	I0409 00:29:35.426889   68181 main.go:141] libmachine: (flannel-459514) DBG | domain flannel-459514 has defined MAC address 52:54:00:b0:26:17 in network mk-flannel-459514
	I0409 00:29:35.429884   68181 main.go:141] libmachine: (flannel-459514) Calling .GetSSHPort
	I0409 00:29:35.429976   68181 main.go:141] libmachine: (flannel-459514) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:26:17", ip: ""} in network mk-flannel-459514: {Iface:virbr4 ExpiryTime:2025-04-09 01:29:03 +0000 UTC Type:0 Mac:52:54:00:b0:26:17 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:flannel-459514 Clientid:01:52:54:00:b0:26:17}
	I0409 00:29:35.429995   68181 main.go:141] libmachine: (flannel-459514) DBG | domain flannel-459514 has defined IP address 192.168.72.137 and MAC address 52:54:00:b0:26:17 in network mk-flannel-459514
	I0409 00:29:35.430892   68181 main.go:141] libmachine: (flannel-459514) Calling .GetSSHKeyPath
	I0409 00:29:35.431083   68181 main.go:141] libmachine: (flannel-459514) Calling .GetSSHUsername
	I0409 00:29:35.431263   68181 sshutil.go:53] new ssh client: &{IP:192.168.72.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20501-9125/.minikube/machines/flannel-459514/id_rsa Username:docker}
	I0409 00:29:35.438361   68181 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32867
	I0409 00:29:35.438982   68181 main.go:141] libmachine: () Calling .GetVersion
	I0409 00:29:35.439674   68181 main.go:141] libmachine: Using API Version  1
	I0409 00:29:35.439699   68181 main.go:141] libmachine: () Calling .SetConfigRaw
	I0409 00:29:35.440121   68181 main.go:141] libmachine: () Calling .GetMachineName
	I0409 00:29:35.440388   68181 main.go:141] libmachine: (flannel-459514) Calling .GetState
	I0409 00:29:35.442281   68181 main.go:141] libmachine: (flannel-459514) Calling .DriverName
	I0409 00:29:35.442547   68181 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0409 00:29:35.442561   68181 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0409 00:29:35.442576   68181 main.go:141] libmachine: (flannel-459514) Calling .GetSSHHostname
	I0409 00:29:35.446020   68181 main.go:141] libmachine: (flannel-459514) DBG | domain flannel-459514 has defined MAC address 52:54:00:b0:26:17 in network mk-flannel-459514
	I0409 00:29:35.446412   68181 main.go:141] libmachine: (flannel-459514) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:26:17", ip: ""} in network mk-flannel-459514: {Iface:virbr4 ExpiryTime:2025-04-09 01:29:03 +0000 UTC Type:0 Mac:52:54:00:b0:26:17 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:flannel-459514 Clientid:01:52:54:00:b0:26:17}
	I0409 00:29:35.446437   68181 main.go:141] libmachine: (flannel-459514) DBG | domain flannel-459514 has defined IP address 192.168.72.137 and MAC address 52:54:00:b0:26:17 in network mk-flannel-459514
	I0409 00:29:35.446642   68181 main.go:141] libmachine: (flannel-459514) Calling .GetSSHPort
	I0409 00:29:35.446837   68181 main.go:141] libmachine: (flannel-459514) Calling .GetSSHKeyPath
	I0409 00:29:35.447002   68181 main.go:141] libmachine: (flannel-459514) Calling .GetSSHUsername
	I0409 00:29:35.447159   68181 sshutil.go:53] new ssh client: &{IP:192.168.72.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20501-9125/.minikube/machines/flannel-459514/id_rsa Username:docker}
	I0409 00:29:35.640801   68181 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0409 00:29:35.677103   68181 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0409 00:29:35.871658   68181 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0409 00:29:35.937844   68181 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0409 00:29:35.099249   69188 main.go:141] libmachine: (kubernetes-upgrade-636554) Calling .GetIP
	I0409 00:29:35.102070   69188 main.go:141] libmachine: (kubernetes-upgrade-636554) DBG | domain kubernetes-upgrade-636554 has defined MAC address 52:54:00:42:e7:ae in network mk-kubernetes-upgrade-636554
	I0409 00:29:35.102491   69188 main.go:141] libmachine: (kubernetes-upgrade-636554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:e7:ae", ip: ""} in network mk-kubernetes-upgrade-636554: {Iface:virbr2 ExpiryTime:2025-04-09 01:28:39 +0000 UTC Type:0 Mac:52:54:00:42:e7:ae Iaid: IPaddr:192.168.50.37 Prefix:24 Hostname:kubernetes-upgrade-636554 Clientid:01:52:54:00:42:e7:ae}
	I0409 00:29:35.102527   69188 main.go:141] libmachine: (kubernetes-upgrade-636554) DBG | domain kubernetes-upgrade-636554 has defined IP address 192.168.50.37 and MAC address 52:54:00:42:e7:ae in network mk-kubernetes-upgrade-636554
	I0409 00:29:35.102795   69188 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0409 00:29:35.107176   69188 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-636554 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:kubernetes-upgrade-636554 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.37 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0409 00:29:35.107299   69188 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0409 00:29:35.107358   69188 ssh_runner.go:195] Run: sudo crictl images --output json
	I0409 00:29:35.158176   69188 crio.go:514] all images are preloaded for cri-o runtime.
	I0409 00:29:35.158203   69188 crio.go:433] Images already preloaded, skipping extraction
	I0409 00:29:35.158256   69188 ssh_runner.go:195] Run: sudo crictl images --output json
	I0409 00:29:35.192892   69188 crio.go:514] all images are preloaded for cri-o runtime.
	I0409 00:29:35.192917   69188 cache_images.go:84] Images are preloaded, skipping loading
	I0409 00:29:35.192925   69188 kubeadm.go:934] updating node { 192.168.50.37 8443 v1.32.2 crio true true} ...
	I0409 00:29:35.193075   69188 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubernetes-upgrade-636554 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.37
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:kubernetes-upgrade-636554 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0409 00:29:35.193176   69188 ssh_runner.go:195] Run: crio config
	I0409 00:29:35.258631   69188 cni.go:84] Creating CNI manager for ""
	I0409 00:29:35.258665   69188 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0409 00:29:35.258682   69188 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0409 00:29:35.258708   69188 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.37 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-636554 NodeName:kubernetes-upgrade-636554 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.37"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.37 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0409 00:29:35.258875   69188 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.37
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-636554"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.37"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.37"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0409 00:29:35.258957   69188 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0409 00:29:35.269132   69188 binaries.go:44] Found k8s binaries, skipping transfer
	I0409 00:29:35.269213   69188 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0409 00:29:35.279008   69188 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I0409 00:29:35.301350   69188 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0409 00:29:35.321083   69188 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2302 bytes)
	I0409 00:29:35.343263   69188 ssh_runner.go:195] Run: grep 192.168.50.37	control-plane.minikube.internal$ /etc/hosts
	I0409 00:29:35.348346   69188 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0409 00:29:35.525147   69188 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0409 00:29:35.541266   69188 certs.go:68] Setting up /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/kubernetes-upgrade-636554 for IP: 192.168.50.37
	I0409 00:29:35.541293   69188 certs.go:194] generating shared ca certs ...
	I0409 00:29:35.541323   69188 certs.go:226] acquiring lock for ca certs: {Name:mk0d455aae85017ac942481bbc1202ccedea144f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0409 00:29:35.541484   69188 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20501-9125/.minikube/ca.key
	I0409 00:29:35.541553   69188 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20501-9125/.minikube/proxy-client-ca.key
	I0409 00:29:35.541574   69188 certs.go:256] generating profile certs ...
	I0409 00:29:35.541700   69188 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/kubernetes-upgrade-636554/client.key
	I0409 00:29:35.541766   69188 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/kubernetes-upgrade-636554/apiserver.key.08cdde78
	I0409 00:29:35.541815   69188 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/kubernetes-upgrade-636554/proxy-client.key
	I0409 00:29:35.541971   69188 certs.go:484] found cert: /home/jenkins/minikube-integration/20501-9125/.minikube/certs/16314.pem (1338 bytes)
	W0409 00:29:35.542008   69188 certs.go:480] ignoring /home/jenkins/minikube-integration/20501-9125/.minikube/certs/16314_empty.pem, impossibly tiny 0 bytes
	I0409 00:29:35.542021   69188 certs.go:484] found cert: /home/jenkins/minikube-integration/20501-9125/.minikube/certs/ca-key.pem (1679 bytes)
	I0409 00:29:35.542062   69188 certs.go:484] found cert: /home/jenkins/minikube-integration/20501-9125/.minikube/certs/ca.pem (1082 bytes)
	I0409 00:29:35.542093   69188 certs.go:484] found cert: /home/jenkins/minikube-integration/20501-9125/.minikube/certs/cert.pem (1123 bytes)
	I0409 00:29:35.542130   69188 certs.go:484] found cert: /home/jenkins/minikube-integration/20501-9125/.minikube/certs/key.pem (1675 bytes)
	I0409 00:29:35.542194   69188 certs.go:484] found cert: /home/jenkins/minikube-integration/20501-9125/.minikube/files/etc/ssl/certs/163142.pem (1708 bytes)
	I0409 00:29:35.543014   69188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20501-9125/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0409 00:29:35.572959   69188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20501-9125/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0409 00:29:35.605304   69188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20501-9125/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0409 00:29:35.631724   69188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20501-9125/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0409 00:29:35.662509   69188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/kubernetes-upgrade-636554/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0409 00:29:35.694995   69188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/kubernetes-upgrade-636554/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0409 00:29:35.819681   69188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/kubernetes-upgrade-636554/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0409 00:29:35.974376   69188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/kubernetes-upgrade-636554/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0409 00:29:36.188087   69188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20501-9125/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0409 00:29:36.187731   68181 start.go:971] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
	I0409 00:29:36.189085   68181 node_ready.go:35] waiting up to 15m0s for node "flannel-459514" to be "Ready" ...
	I0409 00:29:36.699989   68181 kapi.go:214] "coredns" deployment in "kube-system" namespace and "flannel-459514" context rescaled to 1 replicas
	I0409 00:29:36.749608   68181 main.go:141] libmachine: Making call to close driver server
	I0409 00:29:36.749770   68181 main.go:141] libmachine: (flannel-459514) Calling .Close
	I0409 00:29:36.749751   68181 main.go:141] libmachine: Making call to close driver server
	I0409 00:29:36.749864   68181 main.go:141] libmachine: (flannel-459514) Calling .Close
	I0409 00:29:36.750236   68181 main.go:141] libmachine: (flannel-459514) DBG | Closing plugin on server side
	I0409 00:29:36.750254   68181 main.go:141] libmachine: (flannel-459514) DBG | Closing plugin on server side
	I0409 00:29:36.750254   68181 main.go:141] libmachine: Successfully made call to close driver server
	I0409 00:29:36.750264   68181 main.go:141] libmachine: Making call to close connection to plugin binary
	I0409 00:29:36.750276   68181 main.go:141] libmachine: Making call to close driver server
	I0409 00:29:36.750277   68181 main.go:141] libmachine: Successfully made call to close driver server
	I0409 00:29:36.750284   68181 main.go:141] libmachine: (flannel-459514) Calling .Close
	I0409 00:29:36.750286   68181 main.go:141] libmachine: Making call to close connection to plugin binary
	I0409 00:29:36.750293   68181 main.go:141] libmachine: Making call to close driver server
	I0409 00:29:36.750298   68181 main.go:141] libmachine: (flannel-459514) Calling .Close
	I0409 00:29:36.750606   68181 main.go:141] libmachine: Successfully made call to close driver server
	I0409 00:29:36.750623   68181 main.go:141] libmachine: Making call to close connection to plugin binary
	I0409 00:29:36.750693   68181 main.go:141] libmachine: (flannel-459514) DBG | Closing plugin on server side
	I0409 00:29:36.750735   68181 main.go:141] libmachine: Successfully made call to close driver server
	I0409 00:29:36.750743   68181 main.go:141] libmachine: Making call to close connection to plugin binary
	I0409 00:29:36.772494   68181 main.go:141] libmachine: Making call to close driver server
	I0409 00:29:36.772520   68181 main.go:141] libmachine: (flannel-459514) Calling .Close
	I0409 00:29:36.772927   68181 main.go:141] libmachine: Successfully made call to close driver server
	I0409 00:29:36.772948   68181 main.go:141] libmachine: Making call to close connection to plugin binary
	I0409 00:29:36.772948   68181 main.go:141] libmachine: (flannel-459514) DBG | Closing plugin on server side
	I0409 00:29:36.774682   68181 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0409 00:29:33.208551   70164 main.go:141] libmachine: (bridge-459514) DBG | domain bridge-459514 has defined MAC address 52:54:00:69:11:f3 in network mk-bridge-459514
	I0409 00:29:33.209127   70164 main.go:141] libmachine: (bridge-459514) DBG | unable to find current IP address of domain bridge-459514 in network mk-bridge-459514
	I0409 00:29:33.209166   70164 main.go:141] libmachine: (bridge-459514) DBG | I0409 00:29:33.209097   70295 retry.go:31] will retry after 2.09301919s: waiting for domain to come up
	I0409 00:29:35.304626   70164 main.go:141] libmachine: (bridge-459514) DBG | domain bridge-459514 has defined MAC address 52:54:00:69:11:f3 in network mk-bridge-459514
	I0409 00:29:35.305037   70164 main.go:141] libmachine: (bridge-459514) DBG | unable to find current IP address of domain bridge-459514 in network mk-bridge-459514
	I0409 00:29:35.305114   70164 main.go:141] libmachine: (bridge-459514) DBG | I0409 00:29:35.305036   70295 retry.go:31] will retry after 2.72092381s: waiting for domain to come up
	I0409 00:29:38.027841   70164 main.go:141] libmachine: (bridge-459514) DBG | domain bridge-459514 has defined MAC address 52:54:00:69:11:f3 in network mk-bridge-459514
	I0409 00:29:38.028351   70164 main.go:141] libmachine: (bridge-459514) DBG | unable to find current IP address of domain bridge-459514 in network mk-bridge-459514
	I0409 00:29:38.028413   70164 main.go:141] libmachine: (bridge-459514) DBG | I0409 00:29:38.028350   70295 retry.go:31] will retry after 3.424500728s: waiting for domain to come up
	I0409 00:29:36.775903   68181 addons.go:514] duration metric: took 1.426098612s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0409 00:29:38.191782   68181 node_ready.go:53] node "flannel-459514" has status "Ready":"False"
	I0409 00:29:40.192709   68181 node_ready.go:53] node "flannel-459514" has status "Ready":"False"
	I0409 00:29:36.281936   69188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20501-9125/.minikube/certs/16314.pem --> /usr/share/ca-certificates/16314.pem (1338 bytes)
	I0409 00:29:36.328190   69188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20501-9125/.minikube/files/etc/ssl/certs/163142.pem --> /usr/share/ca-certificates/163142.pem (1708 bytes)
	I0409 00:29:36.380770   69188 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0409 00:29:36.408033   69188 ssh_runner.go:195] Run: openssl version
	I0409 00:29:36.435434   69188 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0409 00:29:36.464060   69188 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0409 00:29:36.473966   69188 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  8 22:46 /usr/share/ca-certificates/minikubeCA.pem
	I0409 00:29:36.474037   69188 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0409 00:29:36.485583   69188 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0409 00:29:36.506110   69188 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16314.pem && ln -fs /usr/share/ca-certificates/16314.pem /etc/ssl/certs/16314.pem"
	I0409 00:29:36.528832   69188 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16314.pem
	I0409 00:29:36.539700   69188 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  8 22:53 /usr/share/ca-certificates/16314.pem
	I0409 00:29:36.539830   69188 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16314.pem
	I0409 00:29:36.551898   69188 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16314.pem /etc/ssl/certs/51391683.0"
	I0409 00:29:36.573546   69188 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/163142.pem && ln -fs /usr/share/ca-certificates/163142.pem /etc/ssl/certs/163142.pem"
	I0409 00:29:36.587898   69188 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/163142.pem
	I0409 00:29:36.592768   69188 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  8 22:53 /usr/share/ca-certificates/163142.pem
	I0409 00:29:36.592827   69188 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/163142.pem
	I0409 00:29:36.601277   69188 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/163142.pem /etc/ssl/certs/3ec20f2e.0"
	I0409 00:29:36.611838   69188 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0409 00:29:36.620862   69188 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0409 00:29:36.630220   69188 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0409 00:29:36.636251   69188 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0409 00:29:36.645197   69188 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0409 00:29:36.652991   69188 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0409 00:29:36.660403   69188 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0409 00:29:36.666184   69188 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-636554 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:kubernetes-upgrade-636554 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.37 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0409 00:29:36.666272   69188 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0409 00:29:36.666340   69188 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0409 00:29:36.712599   69188 cri.go:89] found id: "1bb4dbd9ad01c23ed7c48aa5b6954322c9c89e4b47dce85b530687bf90b78bee"
	I0409 00:29:36.712624   69188 cri.go:89] found id: "6c781cc5ec1d957aa69d43cf3affec2e9064b52de61e77921b73f53aed0e3089"
	I0409 00:29:36.712629   69188 cri.go:89] found id: "731f8bbd9b774997481dcad3205e2bc4c57bd0530c10187f34943891ec692eed"
	I0409 00:29:36.712634   69188 cri.go:89] found id: "0d6ee11c6ff142b31c2312437e5906d9e28cf348051d0f4d081b873976157c86"
	I0409 00:29:36.712638   69188 cri.go:89] found id: "d70eab403996d218f1affae1d10f6c8c0c78a0ddc094244caf0a191e67ccf461"
	I0409 00:29:36.712642   69188 cri.go:89] found id: "5bf3a548bc58a32a1322cf2b3b29a4956b59f26db84d905a2654e617b511675d"
	I0409 00:29:36.712646   69188 cri.go:89] found id: "f75afb3172604df3adc26e63c07226a4ffe4f6b872d681ef8e8dd3068c1ca01a"
	I0409 00:29:36.712650   69188 cri.go:89] found id: "8778ebf71322d6698b20a14767d70d076a10cffd7372e73ccac1cb8b0695b311"
	I0409 00:29:36.712655   69188 cri.go:89] found id: ""
	I0409 00:29:36.712703   69188 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-636554 -n kubernetes-upgrade-636554
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-636554 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestKubernetesUpgrade FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "kubernetes-upgrade-636554" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-636554
--- FAIL: TestKubernetesUpgrade (397.27s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (7200.055s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.140:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.140:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.140:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.140:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.140:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.140:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.140:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.140:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.140:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.140:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.140:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.140:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.140:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.140:8443: connect: connection refused
E0409 00:44:40.974043   16314 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/no-preload-436551/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.140:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.140:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.140:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.140:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.140:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.140:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.140:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.140:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.140:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.140:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.140:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.140:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.140:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.140:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.140:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.140:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.140:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.140:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.140:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.140:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.140:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.140:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.140:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.140:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.140:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.140:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.140:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.140:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.140:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.140:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.140:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.140:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.140:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.140:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.140:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.140:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.140:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.140:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.140:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.140:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.140:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.140:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.140:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.140:8443: connect: connection refused
E0409 00:45:03.154824   16314 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/default-k8s-diff-port-303895/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.140:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.140:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.140:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.140:8443: connect: connection refused
panic: test timed out after 2h0m0s
	running tests:
		TestStartStop (24m18s)
		TestStartStop/group/old-k8s-version (15m17s)
		TestStartStop/group/old-k8s-version/serial (15m17s)
		TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (31s)

                                                
                                                
goroutine 3560 [running]:
testing.(*M).startAlarm.func1()
	/usr/local/go/src/testing/testing.go:2484 +0x394
created by time.goFunc
	/usr/local/go/src/time/sleep.go:215 +0x2d

                                                
                                                
goroutine 1 [chan receive, 6 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1753 +0x49b
testing.tRunner(0xc000485180, 0xc0005e1bc8)
	/usr/local/go/src/testing/testing.go:1798 +0x12d
testing.runTests(0xc000788198, {0x55010e0, 0x2c, 0x2c}, {0xffffffffffffffff?, 0xc0005bf450?, 0x55289a0?})
	/usr/local/go/src/testing/testing.go:2277 +0x4b4
testing.(*M).Run(0xc000d0ca00)
	/usr/local/go/src/testing/testing.go:2142 +0x64a
k8s.io/minikube/test/integration.TestMain(0xc000d0ca00)
	/home/jenkins/workspace/Build_Cross/test/integration/main_test.go:62 +0x8b
main.main()
	_testmain.go:133 +0xa8

                                                
                                                
goroutine 2198 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3aaf690, 0xc0000840e0}, 0xc001a2f750, 0xc001a2f798)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.2/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x3aaf690, 0xc0000840e0}, 0x90?, 0xc001a2f750, 0xc001a2f798)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.2/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3aaf690?, 0xc0000840e0?}, 0xc001a141c0?, 0x55e020?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.2/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x5944a5?, 0xc000d10180?, 0xc0005c6690?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.2/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2185
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.2/transport/cert_rotation.go:145 +0x27a

                                                
                                                
goroutine 32 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.2/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 31
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.2/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 3082 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3aaf690, 0xc0000840e0}, 0xc000d3cf50, 0xc000d3cf98)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.2/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x3aaf690, 0xc0000840e0}, 0x0?, 0xc000d3cf50, 0xc000d3cf98)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.2/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3aaf690?, 0xc0000840e0?}, 0x28d0060?, 0xc000d3cfb8?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.2/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc000d3cfd0?, 0x9e0625?, 0xc001d121c0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.2/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3106
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.2/transport/cert_rotation.go:145 +0x27a

                                                
                                                
goroutine 135 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x3ac0600)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.2/util/workqueue/delaying_queue.go:311 +0x345
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 120
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.2/util/workqueue/delaying_queue.go:148 +0x245

                                                
                                                
goroutine 2232 [sync.Cond.Wait, 4 minutes]:
sync.runtime_notifyListWait(0xc000a3c750, 0x13)
	/usr/local/go/src/runtime/sema.go:597 +0x159
sync.(*Cond).Wait(0xc0000c9d80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3ac3540)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.2/util/workqueue/queue.go:277 +0x86
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000a3c8c0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.2/transport/cert_rotation.go:159 +0x44
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.2/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc0000ee008?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.2/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc001f7a010, {0x3a6ffa0, 0xc001daa030}, 0x1, 0xc0000840e0)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.2/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc001f7a010, 0x3b9aca00, 0x0, 0x1, 0xc0000840e0)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.2/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.2/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2294
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.2/transport/cert_rotation.go:143 +0x1cf

                                                
                                                
goroutine 31 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3aaf690, 0xc0000840e0}, 0xc0005f1f50, 0xc0005f1f98)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.2/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x3aaf690, 0xc0000840e0}, 0xa0?, 0xc0005f1f50, 0xc0005f1f98)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.2/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3aaf690?, 0xc0000840e0?}, 0x0?, 0x55e020?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.2/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x5944a5?, 0xc001494300?, 0xc0005c62a0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.2/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 136
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.2/transport/cert_rotation.go:145 +0x27a

                                                
                                                
goroutine 3499 [select]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.2/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3498
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.2/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 161 [select, 117 minutes]:
net/http.(*persistConn).readLoop(0xc001438240)
	/usr/local/go/src/net/http/transport.go:2395 +0xc5f
created by net/http.(*Transport).dialConn in goroutine 166
	/usr/local/go/src/net/http/transport.go:1944 +0x174f

                                                
                                                
goroutine 30 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc000783210, 0x2d)
	/usr/local/go/src/runtime/sema.go:597 +0x159
sync.(*Cond).Wait(0xc0005f0d80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3ac3540)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.2/util/workqueue/queue.go:277 +0x86
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000783280)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.2/transport/cert_rotation.go:159 +0x44
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.2/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc0000bf008?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.2/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000720320, {0x3a6ffa0, 0xc0005c4120}, 0x1, 0xc0000840e0)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.2/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000720320, 0x3b9aca00, 0x0, 0x1, 0xc0000840e0)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.2/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.2/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 136
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.2/transport/cert_rotation.go:143 +0x1cf

                                                
                                                
goroutine 136 [chan receive, 117 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000783280, 0xc0000840e0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.2/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 120
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.2/transport/cache.go:122 +0x569

                                                
                                                
goroutine 194 [select, 117 minutes]:
net/http.(*persistConn).writeLoop(0xc001438240)
	/usr/local/go/src/net/http/transport.go:2590 +0xe7
created by net/http.(*Transport).dialConn in goroutine 166
	/usr/local/go/src/net/http/transport.go:1945 +0x17a5

                                                
                                                
goroutine 164 [select, 117 minutes]:
net/http.(*persistConn).writeLoop(0xc00089efc0)
	/usr/local/go/src/net/http/transport.go:2590 +0xe7
created by net/http.(*Transport).dialConn in goroutine 178
	/usr/local/go/src/net/http/transport.go:1945 +0x17a5

                                                
                                                
goroutine 163 [select, 117 minutes]:
net/http.(*persistConn).readLoop(0xc00089efc0)
	/usr/local/go/src/net/http/transport.go:2395 +0xc5f
created by net/http.(*Transport).dialConn in goroutine 178
	/usr/local/go/src/net/http/transport.go:1944 +0x174f

                                                
                                                
goroutine 3441 [select]:
k8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext({0x3aaf300, 0xc00054f1f0}, {0x3a9e450, 0xc000a4b020}, 0x1, 0x0, 0xc00008fbe8)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.2/pkg/util/wait/loop.go:66 +0x1d0
k8s.io/apimachinery/pkg/util/wait.PollUntilContextTimeout({0x3aaf300?, 0xc000576b60?}, 0x3b9aca00, 0xc00008fe10?, 0x1, 0xc00008fbe8)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.2/pkg/util/wait/poll.go:48 +0xa5
k8s.io/minikube/test/integration.PodWait({0x3aaf300, 0xc000576b60}, 0xc001a1e000, {0xc001480390, 0x16}, {0x2d9c565, 0x14}, {0x2db30b2, 0x1c}, 0x7dba821800)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:371 +0x3a5
k8s.io/minikube/test/integration.validateAppExistsAfterStop({0x3aaf300, 0xc000576b60}, 0xc001a1e000, {0xc001480390, 0x16}, {0x2d8e022?, 0xc00048bf60?}, {0x55d633?, 0x4b6853?}, {0xc00091d800, ...})
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:272 +0x139
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0xc001a1e000)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:154 +0x66
testing.tRunner(0xc001a1e000, 0xc00185a080)
	/usr/local/go/src/testing/testing.go:1792 +0xf4
created by testing.(*T).Run in goroutine 2762
	/usr/local/go/src/testing/testing.go:1851 +0x413

                                                
                                                
goroutine 3523 [chan receive]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0009a8d40, 0xc0000840e0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.2/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3441
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.2/transport/cache.go:122 +0x569

                                                
                                                
goroutine 3045 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc0009a9990, 0x11)
	/usr/local/go/src/runtime/sema.go:597 +0x159
sync.(*Cond).Wait(0xc00146dd80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3ac3540)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.2/util/workqueue/queue.go:277 +0x86
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0009a9a00)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.2/transport/cert_rotation.go:159 +0x44
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.2/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x552b0a0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.2/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc001696a70, {0x3a6ffa0, 0xc00162a0f0}, 0x1, 0xc0000840e0)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.2/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc001696a70, 0x3b9aca00, 0x0, 0x1, 0xc0000840e0)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.2/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.2/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3061
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.2/transport/cert_rotation.go:143 +0x1cf

                                                
                                                
goroutine 2528 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x3ac0600)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.2/util/workqueue/delaying_queue.go:311 +0x345
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 2552
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.2/util/workqueue/delaying_queue.go:148 +0x245

                                                
                                                
goroutine 576 [IO wait, 112 minutes]:
internal/poll.runtime_pollWait(0x7f7251cb7d08, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0x85
internal/poll.(*pollDesc).wait(0xc0001ba780?, 0x422ef6?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Accept(0xc0001ba780)
	/usr/local/go/src/internal/poll/fd_unix.go:620 +0x295
net.(*netFD).accept(0xc0001ba780)
	/usr/local/go/src/net/fd_unix.go:172 +0x29
net.(*TCPListener).accept(0xc000d1e740)
	/usr/local/go/src/net/tcpsock_posix.go:159 +0x1b
net.(*TCPListener).Accept(0xc000d1e740)
	/usr/local/go/src/net/tcpsock.go:380 +0x30
net/http.(*Server).Serve(0xc001402000, {0x3a9de20, 0xc000d1e740})
	/usr/local/go/src/net/http/server.go:3424 +0x30c
net/http.(*Server).ListenAndServe(0xc001402000)
	/usr/local/go/src/net/http/server.go:3350 +0x71
k8s.io/minikube/test/integration.startHTTPProxy.func1(...)
	/home/jenkins/workspace/Build_Cross/test/integration/functional_test.go:2230
created by k8s.io/minikube/test/integration.startHTTPProxy in goroutine 573
	/home/jenkins/workspace/Build_Cross/test/integration/functional_test.go:2229 +0x129

                                                
                                                
goroutine 2841 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3aaf690, 0xc0000840e0}, 0xc00048bf50, 0xc001acef98)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.2/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x3aaf690, 0xc0000840e0}, 0x20?, 0xc00048bf50, 0xc00048bf98)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.2/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3aaf690?, 0xc0000840e0?}, 0x0?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.2/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x5944a5?, 0xc00189e300?, 0xc001a0fb20?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.2/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2885
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.2/transport/cert_rotation.go:145 +0x27a

                                                
                                                
goroutine 2184 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x3ac0600)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.2/util/workqueue/delaying_queue.go:311 +0x345
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 2180
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.2/util/workqueue/delaying_queue.go:148 +0x245

                                                
                                                
goroutine 2762 [chan receive]:
testing.(*T).Run(0xc001a14000, {0x2da1c89?, 0xc001445570?}, 0xc00185a080)
	/usr/local/go/src/testing/testing.go:1859 +0x431
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc001a14000)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:153 +0x2af
testing.tRunner(0xc001a14000, 0xc00051aa80)
	/usr/local/go/src/testing/testing.go:1792 +0xf4
created by testing.(*T).Run in goroutine 2015
	/usr/local/go/src/testing/testing.go:1851 +0x413

                                                
                                                
goroutine 2014 [chan receive, 24 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1753 +0x49b
testing.tRunner(0xc000582fc0, 0x3710cb8)
	/usr/local/go/src/testing/testing.go:1798 +0x12d
created by testing.(*T).Run in goroutine 1572
	/usr/local/go/src/testing/testing.go:1851 +0x413

                                                
                                                
goroutine 2681 [chan receive, 16 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0019c1400, 0xc0000840e0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.2/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2709
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.2/transport/cache.go:122 +0x569

                                                
                                                
goroutine 2197 [sync.Cond.Wait, 4 minutes]:
sync.runtime_notifyListWait(0xc0009a9210, 0x14)
	/usr/local/go/src/runtime/sema.go:597 +0x159
sync.(*Cond).Wait(0xc001476d80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3ac3540)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.2/util/workqueue/queue.go:277 +0x86
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0009a9240)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.2/transport/cert_rotation.go:159 +0x44
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.2/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x552b0a0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.2/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc001f7a410, {0x3a6ffa0, 0xc001848240}, 0x1, 0xc0000840e0)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.2/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc001f7a410, 0x3b9aca00, 0x0, 0x1, 0xc0000840e0)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.2/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.2/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2185
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.2/transport/cert_rotation.go:143 +0x1cf

                                                
                                                
goroutine 1572 [chan receive, 24 minutes]:
testing.(*T).Run(0xc00147ae00, {0x2d77fe6?, 0x55d633?}, 0x3710cb8)
	/usr/local/go/src/testing/testing.go:1859 +0x431
k8s.io/minikube/test/integration.TestStartStop(0xc00147ae00)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:46 +0x35
testing.tRunner(0xc00147ae00, 0x3710ac8)
	/usr/local/go/src/testing/testing.go:1792 +0xf4
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1851 +0x413

                                                
                                                
goroutine 2015 [chan receive, 15 minutes]:
testing.(*T).Run(0xc0000f8700, {0x2d793da?, 0x0?}, 0xc00051aa80)
	/usr/local/go/src/testing/testing.go:1859 +0x431
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc0000f8700)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:128 +0xad9
testing.tRunner(0xc0000f8700, 0xc000d1f2c0)
	/usr/local/go/src/testing/testing.go:1792 +0xf4
created by testing.(*T).Run in goroutine 2014
	/usr/local/go/src/testing/testing.go:1851 +0x413

                                                
                                                
goroutine 2410 [chan receive, 18 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0019c1cc0, 0xc0000840e0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.2/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2408
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.2/transport/cache.go:122 +0x569

                                                
                                                
goroutine 2842 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.2/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2841
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.2/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 2293 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x3ac0600)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.2/util/workqueue/delaying_queue.go:311 +0x345
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 2292
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.2/util/workqueue/delaying_queue.go:148 +0x245

                                                
                                                
goroutine 2558 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.2/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2557
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.2/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 3046 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3aaf690, 0xc0000840e0}, 0xc001ccef50, 0xc001ccef98)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.2/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x3aaf690, 0xc0000840e0}, 0x0?, 0xc001ccef50, 0xc001ccef98)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.2/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3aaf690?, 0xc0000840e0?}, 0x0?, 0x10000000055e020?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.2/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc001696750?, 0xc0018443b8?, 0xc001ccefa8?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.2/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3061
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.2/transport/cert_rotation.go:145 +0x27a

                                                
                                                
goroutine 2199 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.2/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2198
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.2/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 3498 [select]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3aaf690, 0xc0000840e0}, 0xc001ad2f50, 0xc001ad2f98)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.2/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x3aaf690, 0xc0000840e0}, 0xa0?, 0xc001ad2f50, 0xc001ad2f98)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.2/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3aaf690?, 0xc0000840e0?}, 0x0?, 0xc0000b7fd0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.2/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0000b7fd0?, 0x594504?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.2/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3523
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.2/transport/cert_rotation.go:145 +0x27a

                                                
                                                
goroutine 2693 [select]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.2/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2692
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.2/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 2233 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3aaf690, 0xc0000840e0}, 0xc0005fdf50, 0xc0005fdf98)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.2/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x3aaf690, 0xc0000840e0}, 0xb0?, 0xc0005fdf50, 0xc0005fdf98)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.2/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3aaf690?, 0xc0000840e0?}, 0x0?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.2/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x5944a5?, 0xc001677200?, 0xc0001113b0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.2/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2294
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.2/transport/cert_rotation.go:145 +0x27a

                                                
                                                
goroutine 2529 [chan receive, 16 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000960e80, 0xc0000840e0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.2/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2552
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.2/transport/cache.go:122 +0x569

                                                
                                                
goroutine 2691 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc0019c13d0, 0x13)
	/usr/local/go/src/runtime/sema.go:597 +0x159
sync.(*Cond).Wait(0xc001ad0d80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3ac3540)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.2/util/workqueue/queue.go:277 +0x86
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0019c1400)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.2/transport/cert_rotation.go:159 +0x44
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.2/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc000500008?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.2/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000720a50, {0x3a6ffa0, 0xc001848c60}, 0x1, 0xc0000840e0)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.2/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000720a50, 0x3b9aca00, 0x0, 0x1, 0xc0000840e0)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.2/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.2/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2681
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.2/transport/cert_rotation.go:143 +0x1cf

                                                
                                                
goroutine 2556 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc000960dd0, 0x13)
	/usr/local/go/src/runtime/sema.go:597 +0x159
sync.(*Cond).Wait(0xc0000cad80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3ac3540)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.2/util/workqueue/queue.go:277 +0x86
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000960e80)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.2/transport/cert_rotation.go:159 +0x44
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.2/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc00048e008?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.2/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0005c8700, {0x3a6ffa0, 0xc000ce11a0}, 0x1, 0xc0000840e0)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.2/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0005c8700, 0x3b9aca00, 0x0, 0x1, 0xc0000840e0)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.2/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.2/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2529
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.2/transport/cert_rotation.go:143 +0x1cf

                                                
                                                
goroutine 2294 [chan receive, 18 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000a3c8c0, 0xc0000840e0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.2/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2292
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.2/transport/cache.go:122 +0x569

                                                
                                                
goroutine 2409 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x3ac0600)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.2/util/workqueue/delaying_queue.go:311 +0x345
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 2408
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.2/util/workqueue/delaying_queue.go:148 +0x245

                                                
                                                
goroutine 2234 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.2/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2233
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.2/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 2557 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3aaf690, 0xc0000840e0}, 0xc001a2d750, 0xc001a2d798)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.2/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x3aaf690, 0xc0000840e0}, 0x0?, 0xc001a2d750, 0xc001a2d798)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.2/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3aaf690?, 0xc0000840e0?}, 0x0?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.2/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc001a2d7d0?, 0x594504?, 0xc000110700?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.2/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2529
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.2/transport/cert_rotation.go:145 +0x27a

                                                
                                                
goroutine 2185 [chan receive, 20 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0009a9240, 0xc0000840e0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.2/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2180
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.2/transport/cache.go:122 +0x569

                                                
                                                
goroutine 2380 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc0019c1c90, 0x13)
	/usr/local/go/src/runtime/sema.go:597 +0x159
sync.(*Cond).Wait(0xc001474d80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3ac3540)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.2/util/workqueue/queue.go:277 +0x86
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0019c1cc0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.2/transport/cert_rotation.go:159 +0x44
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.2/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc00048e008?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.2/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc001f8dbb0, {0x3a6ffa0, 0xc0008b6600}, 0x1, 0xc0000840e0)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.2/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc001f8dbb0, 0x3b9aca00, 0x0, 0x1, 0xc0000840e0)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.2/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.2/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2410
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.2/transport/cert_rotation.go:143 +0x1cf

                                                
                                                
goroutine 2382 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.2/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2381
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.2/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 2381 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3aaf690, 0xc0000840e0}, 0xc001bf4f50, 0xc001bf4f98)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.2/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x3aaf690, 0xc0000840e0}, 0x60?, 0xc001bf4f50, 0xc001bf4f98)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.2/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3aaf690?, 0xc0000840e0?}, 0xc001a14c40?, 0x55e020?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.2/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x5944a5?, 0xc001601080?, 0xc00160d960?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.2/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2410
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.2/transport/cert_rotation.go:145 +0x27a

                                                
                                                
goroutine 2692 [select]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3aaf690, 0xc0000840e0}, 0xc001447750, 0xc0000c8f98)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.2/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x3aaf690, 0xc0000840e0}, 0xf0?, 0xc001447750, 0xc001447798)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.2/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3aaf690?, 0xc0000840e0?}, 0x7ed215?, 0xc0005a0680?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.2/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x5944a5?, 0xc001795e00?, 0xc001428af0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.2/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2681
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.2/transport/cert_rotation.go:145 +0x27a

                                                
                                                
goroutine 2814 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x3ac0600)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.2/util/workqueue/delaying_queue.go:311 +0x345
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 2813
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.2/util/workqueue/delaying_queue.go:148 +0x245

                                                
                                                
goroutine 2680 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x3ac0600)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.2/util/workqueue/delaying_queue.go:311 +0x345
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 2709
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.2/util/workqueue/delaying_queue.go:148 +0x245

                                                
                                                
goroutine 3073 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x3ac0600)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.2/util/workqueue/delaying_queue.go:311 +0x345
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 3069
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.2/util/workqueue/delaying_queue.go:148 +0x245

                                                
                                                
goroutine 2815 [chan receive, 15 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000a3cbc0, 0xc0000840e0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.2/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2813
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.2/transport/cache.go:122 +0x569

                                                
                                                
goroutine 2840 [sync.Cond.Wait, 4 minutes]:
sync.runtime_notifyListWait(0xc000d1e5d0, 0x12)
	/usr/local/go/src/runtime/sema.go:597 +0x159
sync.(*Cond).Wait(0xc001acdd80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3ac3540)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.2/util/workqueue/queue.go:277 +0x86
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000d1e600)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.2/transport/cert_rotation.go:159 +0x44
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.2/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc000580008?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.2/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc001f8d370, {0x3a6ffa0, 0xc000775c80}, 0x1, 0xc0000840e0)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.2/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc001f8d370, 0x3b9aca00, 0x0, 0x1, 0xc0000840e0)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.2/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.2/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2885
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.2/transport/cert_rotation.go:143 +0x1cf

                                                
                                                
goroutine 3081 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc000783090, 0x11)
	/usr/local/go/src/runtime/sema.go:597 +0x159
sync.(*Cond).Wait(0xc00163ed80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3ac3540)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.2/util/workqueue/queue.go:277 +0x86
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000783180)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.2/transport/cert_rotation.go:159 +0x44
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.2/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x552b0a0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.2/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0008c0fa0, {0x3a6ffa0, 0xc0015f1440}, 0x1, 0xc0000840e0)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.2/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0008c0fa0, 0x3b9aca00, 0x0, 0x1, 0xc0000840e0)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.2/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.2/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3106
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.2/transport/cert_rotation.go:143 +0x1cf

                                                
                                                
goroutine 3083 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.2/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3082
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.2/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 2749 [sync.Cond.Wait, 4 minutes]:
sync.runtime_notifyListWait(0xc000a3cb90, 0x12)
	/usr/local/go/src/runtime/sema.go:597 +0x159
sync.(*Cond).Wait(0xc001479d80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3ac3540)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.2/util/workqueue/queue.go:277 +0x86
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000a3cbc0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.2/transport/cert_rotation.go:159 +0x44
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.2/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x552b0a0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.2/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000720390, {0x3a6ffa0, 0xc0008f6030}, 0x1, 0xc0000840e0)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.2/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000720390, 0x3b9aca00, 0x0, 0x1, 0xc0000840e0)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.2/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.2/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2815
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.2/transport/cert_rotation.go:143 +0x1cf

                                                
                                                
goroutine 3060 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x3ac0600)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.2/util/workqueue/delaying_queue.go:311 +0x345
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 2960
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.2/util/workqueue/delaying_queue.go:148 +0x245

                                                
                                                
goroutine 2750 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3aaf690, 0xc0000840e0}, 0xc000d3bf50, 0xc000d3bf98)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.2/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x3aaf690, 0xc0000840e0}, 0x90?, 0xc000d3bf50, 0xc000d3bf98)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.2/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3aaf690?, 0xc0000840e0?}, 0x0?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.2/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc000d3bfd0?, 0x594504?, 0xc002081570?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.2/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2815
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.2/transport/cert_rotation.go:145 +0x27a

                                                
                                                
goroutine 2751 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.2/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2750
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.2/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 3061 [chan receive, 14 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0009a9a00, 0xc0000840e0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.2/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2960
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.2/transport/cache.go:122 +0x569

                                                
                                                
goroutine 2885 [chan receive, 15 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000d1e600, 0xc0000840e0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.2/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2836
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.2/transport/cache.go:122 +0x569

                                                
                                                
goroutine 2884 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x3ac0600)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.2/util/workqueue/delaying_queue.go:311 +0x345
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 2836
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.2/util/workqueue/delaying_queue.go:148 +0x245

                                                
                                                
goroutine 3047 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.2/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3046
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.2/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 3106 [chan receive, 14 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000783180, 0xc0000840e0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.2/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3069
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.2/transport/cache.go:122 +0x569

                                                
                                                
goroutine 3522 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x3ac0600)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.2/util/workqueue/delaying_queue.go:311 +0x345
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 3441
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.2/util/workqueue/delaying_queue.go:148 +0x245

                                                
                                                
goroutine 3497 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc0009a8c90, 0x0)
	/usr/local/go/src/runtime/sema.go:597 +0x159
sync.(*Cond).Wait(0xc001643d80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3ac3540)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.2/util/workqueue/queue.go:277 +0x86
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0009a8d40)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.2/transport/cert_rotation.go:159 +0x44
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.2/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc000a00008?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.2/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc001db0010, {0x3a6ffa0, 0xc001c1c030}, 0x1, 0xc0000840e0)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.2/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc001db0010, 0x3b9aca00, 0x0, 0x1, 0xc0000840e0)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.2/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.2/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3523
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.2/transport/cert_rotation.go:143 +0x1cf
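
Aside: the frames that repeat throughout the dump above are client-go's certificate-rotation machinery. Each `(*dynamicClientCert).Run` spawns a worker that blocks on a workqueue (`sync.Cond.Wait` in `workqueue.(*Typed).Get`) and a poller that sleeps between checks (`PollImmediateUntilWithContext`), so goroutines parked there for minutes are idle, not leaked. Below is a minimal, standard-library-only sketch of that worker/poller shape; it is illustrative of the pattern, not the client-go implementation, and all names in it are invented for the example.

```go
package main

import (
	"fmt"
	"time"
)

// runWorker mimics the shape of the cert-rotation worker: it blocks on a
// queue until an item arrives or shutdown is signalled, which is where the
// "sync.Cond.Wait" goroutines in the dump are parked.
func runWorker(queue <-chan string, stopCh <-chan struct{}) {
	for {
		select {
		case key, ok := <-queue:
			if !ok {
				return // queue closed, worker exits
			}
			fmt.Println("processing", key)
		case <-stopCh:
			return
		}
	}
}

// poller mimics the PollImmediateUntilWithContext frames: run a check, then
// sleep until the next tick or until the stop channel closes.
func poller(check func() bool, interval time.Duration, stopCh <-chan struct{}) {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for {
		if check() {
			return
		}
		select {
		case <-ticker.C: // parked here between checks ("select, N minutes")
		case <-stopCh:
			return
		}
	}
}

func main() {
	stopCh := make(chan struct{})
	queue := make(chan string, 1)

	go runWorker(queue, stopCh)
	go poller(func() bool { return false }, time.Second, stopCh)

	queue <- "cert-rotation-key"
	time.Sleep(100 * time.Millisecond)
	close(stopCh) // shut both goroutines down, as happens at process exit
}
```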

                                                
                                    

Test pass (181/220)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 23.44
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.05
9 TestDownloadOnly/v1.20.0/DeleteAll 0.13
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.12
12 TestDownloadOnly/v1.32.2/json-events 13.02
13 TestDownloadOnly/v1.32.2/preload-exists 0
17 TestDownloadOnly/v1.32.2/LogsDuration 0.06
18 TestDownloadOnly/v1.32.2/DeleteAll 0.13
19 TestDownloadOnly/v1.32.2/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.59
22 TestOffline 86.63
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 134.19
31 TestAddons/serial/GCPAuth/Namespaces 0.13
32 TestAddons/serial/GCPAuth/FakeCredentials 11.47
35 TestAddons/parallel/Registry 16.77
37 TestAddons/parallel/InspektorGadget 11.9
38 TestAddons/parallel/MetricsServer 6.13
40 TestAddons/parallel/CSI 59.54
41 TestAddons/parallel/Headlamp 18.7
42 TestAddons/parallel/CloudSpanner 5.55
43 TestAddons/parallel/LocalPath 56.43
44 TestAddons/parallel/NvidiaDevicePlugin 6.57
45 TestAddons/parallel/Yakd 11.84
47 TestAddons/StoppedEnableDisable 91.22
48 TestCertOptions 85.82
49 TestCertExpiration 276.25
51 TestForceSystemdFlag 67.34
52 TestForceSystemdEnv 69.89
54 TestKVMDriverInstallOrUpdate 5.12
58 TestErrorSpam/setup 40.41
59 TestErrorSpam/start 0.33
60 TestErrorSpam/status 0.71
61 TestErrorSpam/pause 1.48
62 TestErrorSpam/unpause 1.67
63 TestErrorSpam/stop 5.36
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 80.18
68 TestFunctional/serial/AuditLog 0
70 TestFunctional/serial/KubeContext 0.04
74 TestFunctional/serial/CacheCmd/cache/add_remote 3.4
75 TestFunctional/serial/CacheCmd/cache/add_local 2.05
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
77 TestFunctional/serial/CacheCmd/cache/list 0.04
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.22
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.54
80 TestFunctional/serial/CacheCmd/cache/delete 0.09
85 TestFunctional/delete_echo-server_images 0
86 TestFunctional/delete_my-image_image 0
87 TestFunctional/delete_minikube_cached_images 0
92 TestMultiControlPlane/serial/StartCluster 194.25
93 TestMultiControlPlane/serial/DeployApp 6.77
94 TestMultiControlPlane/serial/PingHostFromPods 1.09
95 TestMultiControlPlane/serial/AddWorkerNode 54.56
96 TestMultiControlPlane/serial/NodeLabels 0.07
97 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.8
98 TestMultiControlPlane/serial/CopyFile 12.27
99 TestMultiControlPlane/serial/StopSecondaryNode 91.55
100 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.61
101 TestMultiControlPlane/serial/RestartSecondaryNode 52.41
102 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.83
103 TestMultiControlPlane/serial/RestartClusterKeepsNodes 434.08
104 TestMultiControlPlane/serial/DeleteSecondaryNode 18.05
105 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.6
106 TestMultiControlPlane/serial/StopCluster 272.85
107 TestMultiControlPlane/serial/RestartCluster 122.82
108 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.58
109 TestMultiControlPlane/serial/AddSecondaryNode 78.09
110 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.82
114 TestJSONOutput/start/Command 48.12
115 TestJSONOutput/start/Audit 0
117 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
118 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
120 TestJSONOutput/pause/Command 0.67
121 TestJSONOutput/pause/Audit 0
123 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
124 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
126 TestJSONOutput/unpause/Command 0.61
127 TestJSONOutput/unpause/Audit 0
129 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
130 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
132 TestJSONOutput/stop/Command 6.69
133 TestJSONOutput/stop/Audit 0
135 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
136 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
137 TestErrorJSONOutput 0.19
142 TestMainNoArgs 0.05
143 TestMinikubeProfile 83.07
146 TestMountStart/serial/StartWithMountFirst 24.94
147 TestMountStart/serial/VerifyMountFirst 0.37
148 TestMountStart/serial/StartWithMountSecond 27.8
149 TestMountStart/serial/VerifyMountSecond 0.37
150 TestMountStart/serial/DeleteFirst 0.67
151 TestMountStart/serial/VerifyMountPostDelete 0.37
152 TestMountStart/serial/Stop 1.27
153 TestMountStart/serial/RestartStopped 22.82
154 TestMountStart/serial/VerifyMountPostStop 0.35
157 TestMultiNode/serial/FreshStart2Nodes 111.66
158 TestMultiNode/serial/DeployApp2Nodes 7.01
159 TestMultiNode/serial/PingHostFrom2Pods 0.73
160 TestMultiNode/serial/AddNode 48.68
161 TestMultiNode/serial/MultiNodeLabels 0.06
162 TestMultiNode/serial/ProfileList 0.56
163 TestMultiNode/serial/CopyFile 6.96
164 TestMultiNode/serial/StopNode 2.23
165 TestMultiNode/serial/StartAfterStop 38.18
166 TestMultiNode/serial/RestartKeepsNodes 338.97
167 TestMultiNode/serial/DeleteNode 2.44
168 TestMultiNode/serial/StopMultiNode 181.83
169 TestMultiNode/serial/RestartMultiNode 114.58
170 TestMultiNode/serial/ValidateNameConflict 46.8
177 TestScheduledStopUnix 113.14
181 TestRunningBinaryUpgrade 228.27
186 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
187 TestNoKubernetes/serial/StartWithK8s 93.95
195 TestNetworkPlugins/group/false 2.9
207 TestPause/serial/Start 72.68
208 TestNoKubernetes/serial/StartWithStopK8s 70.62
209 TestPause/serial/SecondStartNoReconfiguration 40.81
210 TestNoKubernetes/serial/Start 32.01
211 TestNoKubernetes/serial/VerifyK8sNotRunning 0.2
212 TestNoKubernetes/serial/ProfileList 21.25
213 TestPause/serial/Pause 0.73
214 TestPause/serial/VerifyStatus 0.24
215 TestPause/serial/Unpause 0.9
216 TestPause/serial/PauseAgain 0.74
217 TestPause/serial/DeletePaused 1.02
218 TestPause/serial/VerifyDeletedResources 15.28
219 TestNoKubernetes/serial/Stop 1.28
220 TestNoKubernetes/serial/StartNoArgs 51.82
221 TestStoppedBinaryUpgrade/Setup 2.33
222 TestStoppedBinaryUpgrade/Upgrade 150.53
223 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.19
224 TestNetworkPlugins/group/auto/Start 132.07
225 TestStoppedBinaryUpgrade/MinikubeLogs 0.81
226 TestNetworkPlugins/group/kindnet/Start 65.34
227 TestNetworkPlugins/group/auto/KubeletFlags 0.21
228 TestNetworkPlugins/group/auto/NetCatPod 10.26
229 TestNetworkPlugins/group/auto/DNS 0.13
230 TestNetworkPlugins/group/auto/Localhost 0.11
231 TestNetworkPlugins/group/auto/HairPin 0.11
232 TestNetworkPlugins/group/calico/Start 79.03
233 TestNetworkPlugins/group/kindnet/ControllerPod 6
234 TestNetworkPlugins/group/kindnet/KubeletFlags 0.21
235 TestNetworkPlugins/group/kindnet/NetCatPod 10.24
236 TestNetworkPlugins/group/kindnet/DNS 0.16
237 TestNetworkPlugins/group/kindnet/Localhost 0.12
238 TestNetworkPlugins/group/kindnet/HairPin 0.11
239 TestNetworkPlugins/group/custom-flannel/Start 71.15
240 TestNetworkPlugins/group/enable-default-cni/Start 113.17
241 TestNetworkPlugins/group/calico/ControllerPod 6.01
242 TestNetworkPlugins/group/calico/KubeletFlags 0.2
243 TestNetworkPlugins/group/calico/NetCatPod 11.27
244 TestNetworkPlugins/group/calico/DNS 0.17
245 TestNetworkPlugins/group/calico/Localhost 0.15
246 TestNetworkPlugins/group/calico/HairPin 0.18
247 TestNetworkPlugins/group/flannel/Start 124.97
248 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.2
249 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.24
250 TestNetworkPlugins/group/custom-flannel/DNS 0.17
251 TestNetworkPlugins/group/custom-flannel/Localhost 0.14
252 TestNetworkPlugins/group/custom-flannel/HairPin 0.13
253 TestNetworkPlugins/group/bridge/Start 98.69
254 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.2
255 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.24
256 TestNetworkPlugins/group/enable-default-cni/DNS 0.18
257 TestNetworkPlugins/group/enable-default-cni/Localhost 0.13
258 TestNetworkPlugins/group/enable-default-cni/HairPin 0.13
263 TestNetworkPlugins/group/flannel/ControllerPod 6.01
264 TestNetworkPlugins/group/flannel/KubeletFlags 0.19
265 TestNetworkPlugins/group/flannel/NetCatPod 10.2
266 TestNetworkPlugins/group/flannel/DNS 0.16
267 TestNetworkPlugins/group/flannel/Localhost 0.13
268 TestNetworkPlugins/group/flannel/HairPin 0.13
269 TestNetworkPlugins/group/bridge/KubeletFlags 0.27
270 TestNetworkPlugins/group/bridge/NetCatPod 12.12
271 TestNetworkPlugins/group/bridge/DNS 0.15
272 TestNetworkPlugins/group/bridge/Localhost 0.12
273 TestNetworkPlugins/group/bridge/HairPin 0.14
x
+
TestDownloadOnly/v1.20.0/json-events (23.44s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-681558 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-681558 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (23.443764348s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (23.44s)
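
With `-o=json`, minikube writes one JSON event per line to stdout, and that line-delimited stream is what the json-events assertions consume. A hedged sketch of reading such a stream follows; the field name `"type"` is an assumption for illustration, not a claim about minikube's exact event schema.

```go
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

func main() {
	// Typical use:
	//   minikube start -o=json --download-only ... | ./events
	scanner := bufio.NewScanner(os.Stdin)
	scanner.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // tolerate long lines

	for scanner.Scan() {
		var event map[string]interface{}
		if err := json.Unmarshal(scanner.Bytes(), &event); err != nil {
			fmt.Fprintf(os.Stderr, "skipping non-JSON line: %v\n", err)
			continue
		}
		// Print whichever value sits under "type", if any; the exact keys
		// depend on minikube's event schema and are assumed here.
		fmt.Printf("event: %v\n", event["type"])
	}
	if err := scanner.Err(); err != nil {
		fmt.Fprintln(os.Stderr, "read error:", err)
	}
}
```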

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I0408 22:45:28.573819   16314 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
I0408 22:45:28.573900   16314 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20501-9125/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)
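
The preload-exists check amounts to confirming the cached tarball is already on disk so a later start can skip the download. A minimal sketch of that kind of check is below; the path is copied from the log line above and would differ on another machine.

```go
package main

import (
	"fmt"
	"os"
)

func main() {
	// Path taken from the log output above; adjust for your own cache layout.
	preload := "/home/jenkins/minikube-integration/20501-9125/.minikube/cache/" +
		"preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4"

	info, err := os.Stat(preload)
	if err != nil {
		fmt.Println("preload missing, a download would be needed:", err)
		return
	}
	fmt.Printf("found local preload (%d bytes)\n", info.Size())
}
```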

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/LogsDuration (0.05s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-681558
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-681558: exit status 85 (54.000445ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-681558 | jenkins | v1.35.0 | 08 Apr 25 22:45 UTC |          |
	|         | -p download-only-681558        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/08 22:45:05
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0408 22:45:05.173107   16326 out.go:345] Setting OutFile to fd 1 ...
	I0408 22:45:05.173199   16326 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0408 22:45:05.173207   16326 out.go:358] Setting ErrFile to fd 2...
	I0408 22:45:05.173211   16326 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0408 22:45:05.173365   16326 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20501-9125/.minikube/bin
	W0408 22:45:05.173471   16326 root.go:314] Error reading config file at /home/jenkins/minikube-integration/20501-9125/.minikube/config/config.json: open /home/jenkins/minikube-integration/20501-9125/.minikube/config/config.json: no such file or directory
	I0408 22:45:05.174026   16326 out.go:352] Setting JSON to true
	I0408 22:45:05.174860   16326 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":1650,"bootTime":1744150655,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0408 22:45:05.174960   16326 start.go:139] virtualization: kvm guest
	I0408 22:45:05.177223   16326 out.go:97] [download-only-681558] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	W0408 22:45:05.177336   16326 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/20501-9125/.minikube/cache/preloaded-tarball: no such file or directory
	I0408 22:45:05.177395   16326 notify.go:220] Checking for updates...
	I0408 22:45:05.178642   16326 out.go:169] MINIKUBE_LOCATION=20501
	I0408 22:45:05.179895   16326 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0408 22:45:05.181150   16326 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20501-9125/kubeconfig
	I0408 22:45:05.182353   16326 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20501-9125/.minikube
	I0408 22:45:05.183510   16326 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0408 22:45:05.185723   16326 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0408 22:45:05.185925   16326 driver.go:404] Setting default libvirt URI to qemu:///system
	I0408 22:45:05.286795   16326 out.go:97] Using the kvm2 driver based on user configuration
	I0408 22:45:05.286828   16326 start.go:297] selected driver: kvm2
	I0408 22:45:05.286834   16326 start.go:901] validating driver "kvm2" against <nil>
	I0408 22:45:05.287288   16326 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 22:45:05.287439   16326 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20501-9125/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0408 22:45:05.302206   16326 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0408 22:45:05.302253   16326 start_flags.go:311] no existing cluster config was found, will generate one from the flags 
	I0408 22:45:05.303027   16326 start_flags.go:394] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0408 22:45:05.303239   16326 start_flags.go:957] Wait components to verify : map[apiserver:true system_pods:true]
	I0408 22:45:05.303280   16326 cni.go:84] Creating CNI manager for ""
	I0408 22:45:05.303339   16326 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0408 22:45:05.303351   16326 start_flags.go:320] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0408 22:45:05.303422   16326 start.go:340] cluster config:
	{Name:download-only-681558 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-681558 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Cont
ainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 22:45:05.303667   16326 iso.go:125] acquiring lock: {Name:mk618477bad490b102618c53c9c8c6b34f33ce81 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 22:45:05.305416   16326 out.go:97] Downloading VM boot image ...
	I0408 22:45:05.305449   16326 download.go:108] Downloading: https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso?checksum=file:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso.sha256 -> /home/jenkins/minikube-integration/20501-9125/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0408 22:45:15.109549   16326 out.go:97] Starting "download-only-681558" primary control-plane node in "download-only-681558" cluster
	I0408 22:45:15.109583   16326 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0408 22:45:15.209432   16326 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0408 22:45:15.209462   16326 cache.go:56] Caching tarball of preloaded images
	I0408 22:45:15.209707   16326 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0408 22:45:15.211332   16326 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0408 22:45:15.211351   16326 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0408 22:45:15.310610   16326 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/20501-9125/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-681558 host does not exist
	  To start a cluster, run: "minikube start -p download-only-681558"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.05s)
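
The `download.go` lines in the log above fetch the ISO and the preload with a checksum hint appended to the URL (`?checksum=md5:...`). The sketch below shows the general pattern of downloading to a file and verifying an MD5 digest in one pass; it does not claim to mirror minikube's internal downloader, and the URL and digest are copied from the log purely as example inputs.

```go
package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
)

// downloadAndVerify fetches url into dest and compares the MD5 digest of the
// body against wantMD5 (hex-encoded). Illustrative only; a real downloader
// also handles retries, progress reporting, and other checksum types.
func downloadAndVerify(url, dest, wantMD5 string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("unexpected status: %s", resp.Status)
	}

	out, err := os.Create(dest)
	if err != nil {
		return err
	}
	defer out.Close()

	hash := md5.New()
	// Stream the body to the file and the hash at the same time.
	if _, err := io.Copy(io.MultiWriter(out, hash), resp.Body); err != nil {
		return err
	}

	got := hex.EncodeToString(hash.Sum(nil))
	if got != wantMD5 {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantMD5)
	}
	return nil
}

func main() {
	url := "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4"
	if err := downloadAndVerify(url, "preload.tar.lz4", "f93b07cde9c3289306cbaeb7a1803c19"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("download verified")
}
```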

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-681558
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
x
+
TestDownloadOnly/v1.32.2/json-events (13.02s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-967138 --force --alsologtostderr --kubernetes-version=v1.32.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-967138 --force --alsologtostderr --kubernetes-version=v1.32.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (13.019936447s)
--- PASS: TestDownloadOnly/v1.32.2/json-events (13.02s)

                                                
                                    
x
+
TestDownloadOnly/v1.32.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.2/preload-exists
I0408 22:45:41.903371   16314 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
I0408 22:45:41.903414   16314 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20501-9125/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.32.2/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.32.2/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-967138
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-967138: exit status 85 (58.294269ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-681558 | jenkins | v1.35.0 | 08 Apr 25 22:45 UTC |                     |
	|         | -p download-only-681558        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.35.0 | 08 Apr 25 22:45 UTC | 08 Apr 25 22:45 UTC |
	| delete  | -p download-only-681558        | download-only-681558 | jenkins | v1.35.0 | 08 Apr 25 22:45 UTC | 08 Apr 25 22:45 UTC |
	| start   | -o=json --download-only        | download-only-967138 | jenkins | v1.35.0 | 08 Apr 25 22:45 UTC |                     |
	|         | -p download-only-967138        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/08 22:45:28
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0408 22:45:28.920531   16575 out.go:345] Setting OutFile to fd 1 ...
	I0408 22:45:28.920761   16575 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0408 22:45:28.920769   16575 out.go:358] Setting ErrFile to fd 2...
	I0408 22:45:28.920773   16575 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0408 22:45:28.920922   16575 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20501-9125/.minikube/bin
	I0408 22:45:28.921477   16575 out.go:352] Setting JSON to true
	I0408 22:45:28.922265   16575 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":1674,"bootTime":1744150655,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0408 22:45:28.922359   16575 start.go:139] virtualization: kvm guest
	I0408 22:45:28.924464   16575 out.go:97] [download-only-967138] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0408 22:45:28.924574   16575 notify.go:220] Checking for updates...
	I0408 22:45:28.926149   16575 out.go:169] MINIKUBE_LOCATION=20501
	I0408 22:45:28.927465   16575 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0408 22:45:28.928653   16575 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20501-9125/kubeconfig
	I0408 22:45:28.929861   16575 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20501-9125/.minikube
	I0408 22:45:28.931097   16575 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0408 22:45:28.933443   16575 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0408 22:45:28.933627   16575 driver.go:404] Setting default libvirt URI to qemu:///system
	I0408 22:45:28.964837   16575 out.go:97] Using the kvm2 driver based on user configuration
	I0408 22:45:28.964868   16575 start.go:297] selected driver: kvm2
	I0408 22:45:28.964873   16575 start.go:901] validating driver "kvm2" against <nil>
	I0408 22:45:28.965285   16575 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 22:45:28.965372   16575 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20501-9125/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0408 22:45:28.980327   16575 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0408 22:45:28.980402   16575 start_flags.go:311] no existing cluster config was found, will generate one from the flags 
	I0408 22:45:28.980976   16575 start_flags.go:394] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0408 22:45:28.981105   16575 start_flags.go:957] Wait components to verify : map[apiserver:true system_pods:true]
	I0408 22:45:28.981129   16575 cni.go:84] Creating CNI manager for ""
	I0408 22:45:28.981176   16575 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0408 22:45:28.981186   16575 start_flags.go:320] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0408 22:45:28.981234   16575 start.go:340] cluster config:
	{Name:download-only-967138 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:download-only-967138 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Cont
ainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 22:45:28.981320   16575 iso.go:125] acquiring lock: {Name:mk618477bad490b102618c53c9c8c6b34f33ce81 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 22:45:28.982882   16575 out.go:97] Starting "download-only-967138" primary control-plane node in "download-only-967138" cluster
	I0408 22:45:28.982899   16575 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0408 22:45:29.126459   16575 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.2/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4
	I0408 22:45:29.126485   16575 cache.go:56] Caching tarball of preloaded images
	I0408 22:45:29.126618   16575 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0408 22:45:29.128702   16575 out.go:97] Downloading Kubernetes v1.32.2 preload ...
	I0408 22:45:29.128720   16575 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 ...
	I0408 22:45:29.229137   16575 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.2/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4?checksum=md5:a1ce605168a895ad5f3b3c8db1fe4d66 -> /home/jenkins/minikube-integration/20501-9125/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-967138 host does not exist
	  To start a cluster, run: "minikube start -p download-only-967138"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.32.2/LogsDuration (0.06s)
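
Editor's note: the download-only log above fetches the preload tarball with a "?checksum=md5:..." query handled by minikube's download helper. As a rough illustration of the same idea, here is a minimal Go sketch that downloads a file and verifies an expected MD5 digest; the URL, destination path, and digest passed in main are placeholders for illustration, not minikube's actual downloader.

    package main

    import (
    	"crypto/md5"
    	"encoding/hex"
    	"fmt"
    	"io"
    	"net/http"
    	"os"
    )

    // downloadWithMD5 fetches url into dst and fails if the MD5 digest of the
    // downloaded bytes does not match wantHex. This mirrors the idea behind the
    // "?checksum=md5:..." parameter seen in the preload download above; it is a
    // simplified sketch, not minikube's implementation.
    func downloadWithMD5(url, dst, wantHex string) error {
    	resp, err := http.Get(url)
    	if err != nil {
    		return err
    	}
    	defer resp.Body.Close()
    	if resp.StatusCode != http.StatusOK {
    		return fmt.Errorf("unexpected status: %s", resp.Status)
    	}

    	out, err := os.Create(dst)
    	if err != nil {
    		return err
    	}
    	defer out.Close()

    	h := md5.New()
    	// Write to the file and the hash in one pass.
    	if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
    		return err
    	}

    	got := hex.EncodeToString(h.Sum(nil))
    	if got != wantHex {
    		return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantHex)
    	}
    	return nil
    }

    func main() {
    	// Placeholder values for illustration only.
    	err := downloadWithMD5(
    		"https://example.com/preloaded-images.tar.lz4",
    		"/tmp/preloaded-images.tar.lz4",
    		"a1ce605168a895ad5f3b3c8db1fe4d66",
    	)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }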

                                                
                                    
x
+
TestDownloadOnly/v1.32.2/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.32.2/DeleteAll (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.32.2/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-967138
--- PASS: TestDownloadOnly/v1.32.2/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
x
+
TestBinaryMirror (0.59s)

                                                
                                                
=== RUN   TestBinaryMirror
I0408 22:45:42.474095   16314 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-634721 --alsologtostderr --binary-mirror http://127.0.0.1:33783 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-634721" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-634721
--- PASS: TestBinaryMirror (0.59s)

                                                
                                    
x
+
TestOffline (86.63s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-989638 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-989638 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (1m25.635165868s)
helpers_test.go:175: Cleaning up "offline-crio-989638" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-989638
--- PASS: TestOffline (86.63s)

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-355098
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-355098: exit status 85 (52.796323ms)

                                                
                                                
-- stdout --
	* Profile "addons-355098" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-355098"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:950: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-355098
addons_test.go:950: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-355098: exit status 85 (51.071626ms)

                                                
                                                
-- stdout --
	* Profile "addons-355098" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-355098"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
x
+
TestAddons/Setup (134.19s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-amd64 start -p addons-355098 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Done: out/minikube-linux-amd64 start -p addons-355098 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m14.192282536s)
--- PASS: TestAddons/Setup (134.19s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.13s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:569: (dbg) Run:  kubectl --context addons-355098 create ns new-namespace
addons_test.go:583: (dbg) Run:  kubectl --context addons-355098 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.13s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/FakeCredentials (11.47s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:614: (dbg) Run:  kubectl --context addons-355098 create -f testdata/busybox.yaml
addons_test.go:621: (dbg) Run:  kubectl --context addons-355098 create sa gcp-auth-test
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [2668e738-92e7-49ab-a3d2-53c61273973a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [2668e738-92e7-49ab-a3d2-53c61273973a] Running
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 11.003033163s
addons_test.go:633: (dbg) Run:  kubectl --context addons-355098 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:645: (dbg) Run:  kubectl --context addons-355098 describe sa gcp-auth-test
addons_test.go:683: (dbg) Run:  kubectl --context addons-355098 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (11.47s)

                                                
                                    
x
+
TestAddons/parallel/Registry (16.77s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:321: registry stabilized in 21.728684ms
addons_test.go:323: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-6c88467877-78d7v" [7430b453-dc57-4eca-89e7-132b388f3fb8] Running
addons_test.go:323: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.00353932s
addons_test.go:326: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-g8xx6" [baa26201-896c-4f4d-ac83-9ee192e0cb9f] Running
addons_test.go:326: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003826291s
addons_test.go:331: (dbg) Run:  kubectl --context addons-355098 delete po -l run=registry-test --now
addons_test.go:336: (dbg) Run:  kubectl --context addons-355098 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:336: (dbg) Done: kubectl --context addons-355098 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.8820606s)
addons_test.go:350: (dbg) Run:  out/minikube-linux-amd64 -p addons-355098 ip
2025/04/08 22:48:33 [DEBUG] GET http://192.168.39.199:5000
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-355098 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.77s)
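
Editor's note: the registry test above probes the in-cluster service with "wget --spider" and then GETs the node's registry port. Below is a minimal Go sketch of the same kind of probe; the endpoint in main is hypothetical (the test uses the node IP printed by "minikube ip"), and the only assumption is that the registry answers plain HTTP on its root path, as in the test.

    package main

    import (
    	"fmt"
    	"net/http"
    	"os"
    	"time"
    )

    // probeRegistry issues a single GET against a registry endpoint and reports
    // whether it answered with a 2xx status, roughly what the "wget --spider"
    // step in the test verifies.
    func probeRegistry(url string) error {
    	client := &http.Client{Timeout: 5 * time.Second}
    	resp, err := client.Get(url)
    	if err != nil {
    		return err
    	}
    	defer resp.Body.Close()
    	if resp.StatusCode < 200 || resp.StatusCode >= 300 {
    		return fmt.Errorf("registry returned %s", resp.Status)
    	}
    	return nil
    }

    func main() {
    	// Hypothetical endpoint for illustration.
    	if err := probeRegistry("http://192.168.39.199:5000/"); err != nil {
    		fmt.Fprintln(os.Stderr, "registry probe failed:", err)
    		os.Exit(1)
    	}
    	fmt.Println("registry reachable")
    }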

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (11.9s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-qnps9" [ae30a75d-380c-4507-a6f7-78eda1db8a9a] Running
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.003218113s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-355098 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-355098 addons disable inspektor-gadget --alsologtostderr -v=1: (5.89464371s)
--- PASS: TestAddons/parallel/InspektorGadget (11.90s)

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (6.13s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:394: metrics-server stabilized in 21.127571ms
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7fbb699795-4f8hj" [73c9a758-36b5-417d-acd3-24e45007b5ae] Running
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003655562s
addons_test.go:402: (dbg) Run:  kubectl --context addons-355098 top pods -n kube-system
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-355098 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-355098 addons disable metrics-server --alsologtostderr -v=1: (1.043686392s)
--- PASS: TestAddons/parallel/MetricsServer (6.13s)

                                                
                                    
x
+
TestAddons/parallel/CSI (59.54s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I0408 22:48:41.575554   16314 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0408 22:48:41.578625   16314 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0408 22:48:41.578645   16314 kapi.go:107] duration metric: took 3.106785ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:488: csi-hostpath-driver pods stabilized in 3.114533ms
addons_test.go:491: (dbg) Run:  kubectl --context addons-355098 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:496: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-355098 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-355098 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-355098 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-355098 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-355098 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-355098 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-355098 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-355098 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-355098 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-355098 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-355098 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-355098 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-355098 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:501: (dbg) Run:  kubectl --context addons-355098 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:506: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [5e93b66f-f80d-4ccb-bd0c-4097fe83f4f8] Pending
helpers_test.go:344: "task-pv-pod" [5e93b66f-f80d-4ccb-bd0c-4097fe83f4f8] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [5e93b66f-f80d-4ccb-bd0c-4097fe83f4f8] Running
addons_test.go:506: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 13.003918394s
addons_test.go:511: (dbg) Run:  kubectl --context addons-355098 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:516: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-355098 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-355098 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:521: (dbg) Run:  kubectl --context addons-355098 delete pod task-pv-pod
addons_test.go:527: (dbg) Run:  kubectl --context addons-355098 delete pvc hpvc
addons_test.go:533: (dbg) Run:  kubectl --context addons-355098 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:538: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-355098 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-355098 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-355098 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-355098 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-355098 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-355098 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-355098 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-355098 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-355098 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-355098 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-355098 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-355098 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-355098 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-355098 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-355098 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-355098 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:543: (dbg) Run:  kubectl --context addons-355098 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:548: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [8df1f950-62f5-4a3a-ac01-bb003c91d28a] Pending
helpers_test.go:344: "task-pv-pod-restore" [8df1f950-62f5-4a3a-ac01-bb003c91d28a] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [8df1f950-62f5-4a3a-ac01-bb003c91d28a] Running
addons_test.go:548: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.008681376s
addons_test.go:553: (dbg) Run:  kubectl --context addons-355098 delete pod task-pv-pod-restore
addons_test.go:553: (dbg) Done: kubectl --context addons-355098 delete pod task-pv-pod-restore: (1.701518965s)
addons_test.go:557: (dbg) Run:  kubectl --context addons-355098 delete pvc hpvc-restore
addons_test.go:561: (dbg) Run:  kubectl --context addons-355098 delete volumesnapshot new-snapshot-demo
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-355098 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-355098 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-355098 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.917007053s)
--- PASS: TestAddons/parallel/CSI (59.54s)
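
Editor's note: the repeated "get pvc ... -o jsonpath={.status.phase}" lines above are the helper polling the claim until it binds. A small Go sketch of that polling loop follows, shelling out to kubectl the same way; the context, namespace, and claim names are taken from this log, and the sketch assumes kubectl is on PATH.

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    	"time"
    )

    // waitForPVCBound polls the claim's status.phase via kubectl until it reports
    // "Bound" or the deadline passes, mirroring the repeated jsonpath queries in
    // the log above.
    func waitForPVCBound(context, namespace, name string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		out, err := exec.Command(
    			"kubectl", "--context", context, "get", "pvc", name,
    			"-n", namespace, "-o", "jsonpath={.status.phase}",
    		).Output()
    		if err == nil && strings.TrimSpace(string(out)) == "Bound" {
    			return nil
    		}
    		time.Sleep(2 * time.Second)
    	}
    	return fmt.Errorf("pvc %s/%s not Bound within %s", namespace, name, timeout)
    }

    func main() {
    	if err := waitForPVCBound("addons-355098", "default", "hpvc", 6*time.Minute); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Println("pvc bound")
    }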

                                                
                                    
x
+
TestAddons/parallel/Headlamp (18.7s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:747: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-355098 --alsologtostderr -v=1
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-5d4b5d7bd6-24dwb" [40e271a0-415b-402b-b8f4-b19492d4443c] Pending
helpers_test.go:344: "headlamp-5d4b5d7bd6-24dwb" [40e271a0-415b-402b-b8f4-b19492d4443c] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-5d4b5d7bd6-24dwb" [40e271a0-415b-402b-b8f4-b19492d4443c] Running
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 12.003038517s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-355098 addons disable headlamp --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-355098 addons disable headlamp --alsologtostderr -v=1: (5.824240476s)
--- PASS: TestAddons/parallel/Headlamp (18.70s)

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (5.55s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-7dc7f9b5b8-48g8z" [58313983-95bf-4e02-bd52-861f53989bdf] Running
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.004055692s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-355098 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.55s)

                                                
                                    
x
+
TestAddons/parallel/LocalPath (56.43s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:888: (dbg) Run:  kubectl --context addons-355098 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:894: (dbg) Run:  kubectl --context addons-355098 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:898: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-355098 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-355098 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-355098 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-355098 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-355098 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-355098 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-355098 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [e8cd7955-2a99-41ec-a230-80c43885bd44] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [e8cd7955-2a99-41ec-a230-80c43885bd44] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [e8cd7955-2a99-41ec-a230-80c43885bd44] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 6.003584958s
addons_test.go:906: (dbg) Run:  kubectl --context addons-355098 get pvc test-pvc -o=json
addons_test.go:915: (dbg) Run:  out/minikube-linux-amd64 -p addons-355098 ssh "cat /opt/local-path-provisioner/pvc-fb751989-87e0-4024-b7d5-3cb6b29c4ba8_default_test-pvc/file1"
addons_test.go:927: (dbg) Run:  kubectl --context addons-355098 delete pod test-local-path
addons_test.go:931: (dbg) Run:  kubectl --context addons-355098 delete pvc test-pvc
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-355098 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-355098 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.554826168s)
--- PASS: TestAddons/parallel/LocalPath (56.43s)

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (6.57s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-rxkbz" [7c7ef4d7-3bef-4311-b3c2-6811114c281e] Running
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.007390823s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-355098 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.57s)

                                                
                                    
x
+
TestAddons/parallel/Yakd (11.84s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-575dd5996b-mhncs" [2b251f90-3253-406d-b79a-25280d1347d7] Running
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.00369849s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-355098 addons disable yakd --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-355098 addons disable yakd --alsologtostderr -v=1: (5.838572229s)
--- PASS: TestAddons/parallel/Yakd (11.84s)

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (91.22s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-355098
addons_test.go:170: (dbg) Done: out/minikube-linux-amd64 stop -p addons-355098: (1m30.942851681s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-355098
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-355098
addons_test.go:183: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-355098
--- PASS: TestAddons/StoppedEnableDisable (91.22s)

                                                
                                    
x
+
TestCertOptions (85.82s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-534555 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-534555 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (1m23.889627842s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-534555 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-534555 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-534555 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-534555" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-534555
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-534555: (1.410421745s)
--- PASS: TestCertOptions (85.82s)
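
Editor's note: TestCertOptions checks the extra --apiserver-ips/--apiserver-names by reading the apiserver certificate with openssl inside the guest. The same inspection can be sketched in Go with crypto/x509; the certificate path below is the one shown in the test command, but reading it directly like this is only an illustration (on a real cluster the file lives inside the node, not on the host).

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    )

    // printSANs parses a PEM-encoded certificate and prints the DNS names and IP
    // addresses it is valid for, which is what the "openssl x509 -text" step in
    // the test inspects.
    func printSANs(path string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return fmt.Errorf("no PEM block found in %s", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return err
    	}
    	fmt.Println("DNS names:", cert.DNSNames)
    	fmt.Println("IP addresses:", cert.IPAddresses)
    	fmt.Println("Not after:", cert.NotAfter)
    	return nil
    }

    func main() {
    	if err := printSANs("/var/lib/minikube/certs/apiserver.crt"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }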

                                                
                                    
x
+
TestCertExpiration (276.25s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-242018 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-242018 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (49.487204228s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-242018 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-242018 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (45.137753443s)
helpers_test.go:175: Cleaning up "cert-expiration-242018" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-242018
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-242018: (1.620770094s)
--- PASS: TestCertExpiration (276.25s)

                                                
                                    
x
+
TestForceSystemdFlag (67.34s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-660305 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-660305 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m6.127575227s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-660305 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-660305" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-660305
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-660305: (1.020234411s)
--- PASS: TestForceSystemdFlag (67.34s)

                                                
                                    
x
+
TestForceSystemdEnv (69.89s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-033573 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-033573 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m9.134960422s)
helpers_test.go:175: Cleaning up "force-systemd-env-033573" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-033573
--- PASS: TestForceSystemdEnv (69.89s)

                                                
                                    
x
+
TestKVMDriverInstallOrUpdate (5.12s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
I0409 00:20:41.600824   16314 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0409 00:20:41.600952   16314 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-without-version:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
W0409 00:20:41.631731   16314 install.go:62] docker-machine-driver-kvm2: exit status 1
W0409 00:20:41.631881   16314 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0409 00:20:41.631951   16314 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate1714014767/001/docker-machine-driver-kvm2
I0409 00:20:41.847770   16314 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate1714014767/001/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x554c9a0 0x554c9a0 0x554c9a0 0x554c9a0 0x554c9a0 0x554c9a0 0x554c9a0] Decompressors:map[bz2:0xc0003fbae8 gz:0xc0003fbb70 tar:0xc0003fbb20 tar.bz2:0xc0003fbb30 tar.gz:0xc0003fbb40 tar.xz:0xc0003fbb50 tar.zst:0xc0003fbb60 tbz2:0xc0003fbb30 tgz:0xc0003fbb40 txz:0xc0003fbb50 tzst:0xc0003fbb60 xz:0xc0003fbb78 zip:0xc0003fbb80 zst:0xc0003fbb90] Getters:map[file:0xc001f7b260 http:0xc0004dd860 https:0xc0004dd8b0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0409 00:20:41.847825   16314 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate1714014767/001/docker-machine-driver-kvm2
I0409 00:20:44.953619   16314 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0409 00:20:44.953713   16314 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-older-version:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0409 00:20:44.986095   16314 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-older-version/docker-machine-driver-kvm2 version is 1.1.1
W0409 00:20:44.986126   16314 install.go:62] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.3.0
W0409 00:20:44.986184   16314 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0409 00:20:44.986207   16314 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate1714014767/002/docker-machine-driver-kvm2
I0409 00:20:45.028096   16314 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate1714014767/002/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x554c9a0 0x554c9a0 0x554c9a0 0x554c9a0 0x554c9a0 0x554c9a0 0x554c9a0] Decompressors:map[bz2:0xc0003fbae8 gz:0xc0003fbb70 tar:0xc0003fbb20 tar.bz2:0xc0003fbb30 tar.gz:0xc0003fbb40 tar.xz:0xc0003fbb50 tar.zst:0xc0003fbb60 tbz2:0xc0003fbb30 tgz:0xc0003fbb40 txz:0xc0003fbb50 tzst:0xc0003fbb60 xz:0xc0003fbb78 zip:0xc0003fbb80 zst:0xc0003fbb90] Getters:map[file:0xc0008c14b0 http:0xc000a043c0 https:0xc000a044b0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0409 00:20:45.028143   16314 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate1714014767/002/docker-machine-driver-kvm2
--- PASS: TestKVMDriverInstallOrUpdate (5.12s)
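
Editor's note: the driver install log above shows the expected fallback order: the arch-suffixed asset's checksum URL 404s, so the download is retried under the common (unsuffixed) name. A hedged Go sketch of that fallback follows; the URLs track the pattern in the log, but the helper itself is hypothetical and much simpler than minikube's downloader (no checksum handling).

    package main

    import (
    	"fmt"
    	"io"
    	"net/http"
    	"os"
    )

    // fetch writes the body of url to dst, returning an error on any non-200
    // response so the caller can try the next candidate URL.
    func fetch(url, dst string) error {
    	resp, err := http.Get(url)
    	if err != nil {
    		return err
    	}
    	defer resp.Body.Close()
    	if resp.StatusCode != http.StatusOK {
    		return fmt.Errorf("bad response code: %d", resp.StatusCode)
    	}
    	out, err := os.Create(dst)
    	if err != nil {
    		return err
    	}
    	defer out.Close()
    	_, err = io.Copy(out, resp.Body)
    	return err
    }

    func main() {
    	base := "https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2"
    	dst := "/tmp/docker-machine-driver-kvm2"

    	// Prefer the arch-specific asset, then fall back to the common name,
    	// matching the order seen in the TestKVMDriverInstallOrUpdate log.
    	for _, url := range []string{base + "-amd64", base} {
    		if err := fetch(url, dst); err != nil {
    			fmt.Fprintln(os.Stderr, "download failed:", err, "- trying next candidate")
    			continue
    		}
    		fmt.Println("downloaded", dst)
    		return
    	}
    	os.Exit(1)
    }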

                                                
                                    
x
+
TestErrorSpam/setup (40.41s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-715453 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-715453 --driver=kvm2  --container-runtime=crio
E0408 22:52:57.940258   16314 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/addons-355098/client.crt: no such file or directory" logger="UnhandledError"
E0408 22:52:57.946649   16314 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/addons-355098/client.crt: no such file or directory" logger="UnhandledError"
E0408 22:52:57.957939   16314 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/addons-355098/client.crt: no such file or directory" logger="UnhandledError"
E0408 22:52:57.979253   16314 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/addons-355098/client.crt: no such file or directory" logger="UnhandledError"
E0408 22:52:58.020614   16314 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/addons-355098/client.crt: no such file or directory" logger="UnhandledError"
E0408 22:52:58.102051   16314 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/addons-355098/client.crt: no such file or directory" logger="UnhandledError"
E0408 22:52:58.263523   16314 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/addons-355098/client.crt: no such file or directory" logger="UnhandledError"
E0408 22:52:58.585253   16314 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/addons-355098/client.crt: no such file or directory" logger="UnhandledError"
E0408 22:52:59.227296   16314 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/addons-355098/client.crt: no such file or directory" logger="UnhandledError"
E0408 22:53:00.508916   16314 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/addons-355098/client.crt: no such file or directory" logger="UnhandledError"
E0408 22:53:03.070699   16314 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/addons-355098/client.crt: no such file or directory" logger="UnhandledError"
E0408 22:53:08.192991   16314 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/addons-355098/client.crt: no such file or directory" logger="UnhandledError"
E0408 22:53:18.435082   16314 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/addons-355098/client.crt: no such file or directory" logger="UnhandledError"
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-715453 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-715453 --driver=kvm2  --container-runtime=crio: (40.412690458s)
--- PASS: TestErrorSpam/setup (40.41s)

                                                
                                    
x
+
TestErrorSpam/start (0.33s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-715453 --log_dir /tmp/nospam-715453 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-715453 --log_dir /tmp/nospam-715453 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-715453 --log_dir /tmp/nospam-715453 start --dry-run
--- PASS: TestErrorSpam/start (0.33s)

                                                
                                    
x
+
TestErrorSpam/status (0.71s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-715453 --log_dir /tmp/nospam-715453 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-715453 --log_dir /tmp/nospam-715453 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-715453 --log_dir /tmp/nospam-715453 status
--- PASS: TestErrorSpam/status (0.71s)

                                                
                                    
x
+
TestErrorSpam/pause (1.48s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-715453 --log_dir /tmp/nospam-715453 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-715453 --log_dir /tmp/nospam-715453 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-715453 --log_dir /tmp/nospam-715453 pause
--- PASS: TestErrorSpam/pause (1.48s)

                                                
                                    
x
+
TestErrorSpam/unpause (1.67s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-715453 --log_dir /tmp/nospam-715453 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-715453 --log_dir /tmp/nospam-715453 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-715453 --log_dir /tmp/nospam-715453 unpause
--- PASS: TestErrorSpam/unpause (1.67s)

                                                
                                    
x
+
TestErrorSpam/stop (5.36s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-715453 --log_dir /tmp/nospam-715453 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-715453 --log_dir /tmp/nospam-715453 stop: (1.568685494s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-715453 --log_dir /tmp/nospam-715453 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-715453 --log_dir /tmp/nospam-715453 stop: (1.835765737s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-715453 --log_dir /tmp/nospam-715453 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-715453 --log_dir /tmp/nospam-715453 stop: (1.956334592s)
--- PASS: TestErrorSpam/stop (5.36s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1872: local sync path: /home/jenkins/minikube-integration/20501-9125/.minikube/files/etc/test/nested/copy/16314/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (80.18s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2251: (dbg) Run:  out/minikube-linux-amd64 start -p functional-546336 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
E0408 22:53:38.917083   16314 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/addons-355098/client.crt: no such file or directory" logger="UnhandledError"
E0408 22:54:19.879011   16314 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/addons-355098/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:2251: (dbg) Done: out/minikube-linux-amd64 start -p functional-546336 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m20.182073456s)
--- PASS: TestFunctional/serial/StartWithProxy (80.18s)

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:698: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (3.4s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1066: (dbg) Run:  out/minikube-linux-amd64 -p functional-546336 cache add registry.k8s.io/pause:3.1
functional_test.go:1066: (dbg) Done: out/minikube-linux-amd64 -p functional-546336 cache add registry.k8s.io/pause:3.1: (1.149219268s)
functional_test.go:1066: (dbg) Run:  out/minikube-linux-amd64 -p functional-546336 cache add registry.k8s.io/pause:3.3
functional_test.go:1066: (dbg) Done: out/minikube-linux-amd64 -p functional-546336 cache add registry.k8s.io/pause:3.3: (1.12590784s)
functional_test.go:1066: (dbg) Run:  out/minikube-linux-amd64 -p functional-546336 cache add registry.k8s.io/pause:latest
functional_test.go:1066: (dbg) Done: out/minikube-linux-amd64 -p functional-546336 cache add registry.k8s.io/pause:latest: (1.120289892s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.40s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (2.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1094: (dbg) Run:  docker build -t minikube-local-cache-test:functional-546336 /tmp/TestFunctionalserialCacheCmdcacheadd_local1842312174/001
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 -p functional-546336 cache add minikube-local-cache-test:functional-546336
functional_test.go:1106: (dbg) Done: out/minikube-linux-amd64 -p functional-546336 cache add minikube-local-cache-test:functional-546336: (1.753156816s)
functional_test.go:1111: (dbg) Run:  out/minikube-linux-amd64 -p functional-546336 cache delete minikube-local-cache-test:functional-546336
functional_test.go:1100: (dbg) Run:  docker rmi minikube-local-cache-test:functional-546336
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.05s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1119: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1127: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1141: (dbg) Run:  out/minikube-linux-amd64 -p functional-546336 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (1.54s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1164: (dbg) Run:  out/minikube-linux-amd64 -p functional-546336 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1170: (dbg) Run:  out/minikube-linux-amd64 -p functional-546336 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1170: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-546336 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (202.142627ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1175: (dbg) Run:  out/minikube-linux-amd64 -p functional-546336 cache reload
functional_test.go:1180: (dbg) Run:  out/minikube-linux-amd64 -p functional-546336 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.54s)
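The cache_reload sequence above is effectively the manual recovery path for an image that has been removed from the node: delete it with crictl, confirm it is gone, then have minikube push its cache back into the runtime. The commands, copied from this run, are:
	out/minikube-linux-amd64 -p functional-546336 ssh sudo crictl rmi registry.k8s.io/pause:latest
	out/minikube-linux-amd64 -p functional-546336 ssh sudo crictl inspecti registry.k8s.io/pause:latest    (fails with exit status 1 while the image is absent, as seen above)
	out/minikube-linux-amd64 -p functional-546336 cache reload
	out/minikube-linux-amd64 -p functional-546336 ssh sudo crictl inspecti registry.k8s.io/pause:latest    (succeeds again after the reload)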

TestFunctional/serial/CacheCmd/cache/delete (0.09s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1189: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1189: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.09s)

TestFunctional/delete_echo-server_images (0s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:207: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:207: (dbg) Non-zero exit: docker rmi -f kicbase/echo-server:1.0: context deadline exceeded (946ns)
functional_test.go:209: failed to remove image "kicbase/echo-server:1.0" from docker images. args "docker rmi -f kicbase/echo-server:1.0": context deadline exceeded
functional_test.go:207: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-546336
functional_test.go:207: (dbg) Non-zero exit: docker rmi -f kicbase/echo-server:functional-546336: context deadline exceeded (318ns)
functional_test.go:209: failed to remove image "kicbase/echo-server:functional-546336" from docker images. args "docker rmi -f kicbase/echo-server:functional-546336": context deadline exceeded
--- PASS: TestFunctional/delete_echo-server_images (0.00s)

TestFunctional/delete_my-image_image (0s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:215: (dbg) Run:  docker rmi -f localhost/my-image:functional-546336
functional_test.go:215: (dbg) Non-zero exit: docker rmi -f localhost/my-image:functional-546336: context deadline exceeded (302ns)
functional_test.go:217: failed to remove image my-image from docker images. args "docker rmi -f localhost/my-image:functional-546336": context deadline exceeded
--- PASS: TestFunctional/delete_my-image_image (0.00s)

TestFunctional/delete_minikube_cached_images (0s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:223: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-546336
functional_test.go:223: (dbg) Non-zero exit: docker rmi -f minikube-local-cache-test:functional-546336: context deadline exceeded (227ns)
functional_test.go:225: failed to remove image minikube local cache test images from docker. args "docker rmi -f minikube-local-cache-test:functional-546336": context deadline exceeded
--- PASS: TestFunctional/delete_minikube_cached_images (0.00s)

TestMultiControlPlane/serial/StartCluster (194.25s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-848504 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-848504 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m13.63811459s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-848504 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (194.25s)
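The HA cluster used by the remaining TestMultiControlPlane steps is created by the single start invocation shown above; --ha provisions a multi-control-plane topology and --wait=true blocks until the core components report ready. For reference, the two commands from this run were:
	out/minikube-linux-amd64 start -p ha-848504 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
	out/minikube-linux-amd64 -p ha-848504 status -v=7 --alsologtostderr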

TestMultiControlPlane/serial/DeployApp (6.77s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-848504 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-848504 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-848504 -- rollout status deployment/busybox: (4.825412023s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-848504 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-848504 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-848504 -- exec busybox-58667487b6-tggkx -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-848504 -- exec busybox-58667487b6-ts287 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-848504 -- exec busybox-58667487b6-z8zq9 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-848504 -- exec busybox-58667487b6-tggkx -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-848504 -- exec busybox-58667487b6-ts287 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-848504 -- exec busybox-58667487b6-z8zq9 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-848504 -- exec busybox-58667487b6-tggkx -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-848504 -- exec busybox-58667487b6-ts287 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-848504 -- exec busybox-58667487b6-z8zq9 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.77s)

TestMultiControlPlane/serial/PingHostFromPods (1.09s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-848504 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-848504 -- exec busybox-58667487b6-tggkx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-848504 -- exec busybox-58667487b6-tggkx -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-848504 -- exec busybox-58667487b6-ts287 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-848504 -- exec busybox-58667487b6-ts287 -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-848504 -- exec busybox-58667487b6-z8zq9 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-848504 -- exec busybox-58667487b6-z8zq9 -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.09s)

TestMultiControlPlane/serial/AddWorkerNode (54.56s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-848504 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-848504 -v=7 --alsologtostderr: (53.757785583s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-848504 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (54.56s)

TestMultiControlPlane/serial/NodeLabels (0.07s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-848504 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.8s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.80s)

TestMultiControlPlane/serial/CopyFile (12.27s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-848504 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-848504 cp testdata/cp-test.txt ha-848504:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-848504 ssh -n ha-848504 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-848504 cp ha-848504:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1724897477/001/cp-test_ha-848504.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-848504 ssh -n ha-848504 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-848504 cp ha-848504:/home/docker/cp-test.txt ha-848504-m02:/home/docker/cp-test_ha-848504_ha-848504-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-848504 ssh -n ha-848504 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-848504 ssh -n ha-848504-m02 "sudo cat /home/docker/cp-test_ha-848504_ha-848504-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-848504 cp ha-848504:/home/docker/cp-test.txt ha-848504-m03:/home/docker/cp-test_ha-848504_ha-848504-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-848504 ssh -n ha-848504 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-848504 ssh -n ha-848504-m03 "sudo cat /home/docker/cp-test_ha-848504_ha-848504-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-848504 cp ha-848504:/home/docker/cp-test.txt ha-848504-m04:/home/docker/cp-test_ha-848504_ha-848504-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-848504 ssh -n ha-848504 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-848504 ssh -n ha-848504-m04 "sudo cat /home/docker/cp-test_ha-848504_ha-848504-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-848504 cp testdata/cp-test.txt ha-848504-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-848504 ssh -n ha-848504-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-848504 cp ha-848504-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1724897477/001/cp-test_ha-848504-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-848504 ssh -n ha-848504-m02 "sudo cat /home/docker/cp-test.txt"
E0408 23:37:57.931750   16314 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/addons-355098/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-848504 cp ha-848504-m02:/home/docker/cp-test.txt ha-848504:/home/docker/cp-test_ha-848504-m02_ha-848504.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-848504 ssh -n ha-848504-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-848504 ssh -n ha-848504 "sudo cat /home/docker/cp-test_ha-848504-m02_ha-848504.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-848504 cp ha-848504-m02:/home/docker/cp-test.txt ha-848504-m03:/home/docker/cp-test_ha-848504-m02_ha-848504-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-848504 ssh -n ha-848504-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-848504 ssh -n ha-848504-m03 "sudo cat /home/docker/cp-test_ha-848504-m02_ha-848504-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-848504 cp ha-848504-m02:/home/docker/cp-test.txt ha-848504-m04:/home/docker/cp-test_ha-848504-m02_ha-848504-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-848504 ssh -n ha-848504-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-848504 ssh -n ha-848504-m04 "sudo cat /home/docker/cp-test_ha-848504-m02_ha-848504-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-848504 cp testdata/cp-test.txt ha-848504-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-848504 ssh -n ha-848504-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-848504 cp ha-848504-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1724897477/001/cp-test_ha-848504-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-848504 ssh -n ha-848504-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-848504 cp ha-848504-m03:/home/docker/cp-test.txt ha-848504:/home/docker/cp-test_ha-848504-m03_ha-848504.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-848504 ssh -n ha-848504-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-848504 ssh -n ha-848504 "sudo cat /home/docker/cp-test_ha-848504-m03_ha-848504.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-848504 cp ha-848504-m03:/home/docker/cp-test.txt ha-848504-m02:/home/docker/cp-test_ha-848504-m03_ha-848504-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-848504 ssh -n ha-848504-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-848504 ssh -n ha-848504-m02 "sudo cat /home/docker/cp-test_ha-848504-m03_ha-848504-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-848504 cp ha-848504-m03:/home/docker/cp-test.txt ha-848504-m04:/home/docker/cp-test_ha-848504-m03_ha-848504-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-848504 ssh -n ha-848504-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-848504 ssh -n ha-848504-m04 "sudo cat /home/docker/cp-test_ha-848504-m03_ha-848504-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-848504 cp testdata/cp-test.txt ha-848504-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-848504 ssh -n ha-848504-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-848504 cp ha-848504-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1724897477/001/cp-test_ha-848504-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-848504 ssh -n ha-848504-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-848504 cp ha-848504-m04:/home/docker/cp-test.txt ha-848504:/home/docker/cp-test_ha-848504-m04_ha-848504.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-848504 ssh -n ha-848504-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-848504 ssh -n ha-848504 "sudo cat /home/docker/cp-test_ha-848504-m04_ha-848504.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-848504 cp ha-848504-m04:/home/docker/cp-test.txt ha-848504-m02:/home/docker/cp-test_ha-848504-m04_ha-848504-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-848504 ssh -n ha-848504-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-848504 ssh -n ha-848504-m02 "sudo cat /home/docker/cp-test_ha-848504-m04_ha-848504-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-848504 cp ha-848504-m04:/home/docker/cp-test.txt ha-848504-m03:/home/docker/cp-test_ha-848504-m04_ha-848504-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-848504 ssh -n ha-848504-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-848504 ssh -n ha-848504-m03 "sudo cat /home/docker/cp-test_ha-848504-m04_ha-848504-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (12.27s)

TestMultiControlPlane/serial/StopSecondaryNode (91.55s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-848504 node stop m02 -v=7 --alsologtostderr
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-848504 node stop m02 -v=7 --alsologtostderr: (1m30.9572059s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-848504 status -v=7 --alsologtostderr
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-848504 status -v=7 --alsologtostderr: exit status 7 (591.830128ms)

-- stdout --
	ha-848504
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-848504-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-848504-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-848504-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0408 23:39:36.722817   35670 out.go:345] Setting OutFile to fd 1 ...
	I0408 23:39:36.723050   35670 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0408 23:39:36.723059   35670 out.go:358] Setting ErrFile to fd 2...
	I0408 23:39:36.723063   35670 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0408 23:39:36.723231   35670 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20501-9125/.minikube/bin
	I0408 23:39:36.723365   35670 out.go:352] Setting JSON to false
	I0408 23:39:36.723387   35670 mustload.go:65] Loading cluster: ha-848504
	I0408 23:39:36.723419   35670 notify.go:220] Checking for updates...
	I0408 23:39:36.723814   35670 config.go:182] Loaded profile config "ha-848504": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0408 23:39:36.723836   35670 status.go:174] checking status of ha-848504 ...
	I0408 23:39:36.724322   35670 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 23:39:36.724362   35670 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 23:39:36.739723   35670 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37435
	I0408 23:39:36.740139   35670 main.go:141] libmachine: () Calling .GetVersion
	I0408 23:39:36.740716   35670 main.go:141] libmachine: Using API Version  1
	I0408 23:39:36.740749   35670 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 23:39:36.741073   35670 main.go:141] libmachine: () Calling .GetMachineName
	I0408 23:39:36.741263   35670 main.go:141] libmachine: (ha-848504) Calling .GetState
	I0408 23:39:36.742925   35670 status.go:371] ha-848504 host status = "Running" (err=<nil>)
	I0408 23:39:36.742954   35670 host.go:66] Checking if "ha-848504" exists ...
	I0408 23:39:36.743277   35670 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 23:39:36.743337   35670 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 23:39:36.758233   35670 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45557
	I0408 23:39:36.758663   35670 main.go:141] libmachine: () Calling .GetVersion
	I0408 23:39:36.759099   35670 main.go:141] libmachine: Using API Version  1
	I0408 23:39:36.759121   35670 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 23:39:36.759444   35670 main.go:141] libmachine: () Calling .GetMachineName
	I0408 23:39:36.759624   35670 main.go:141] libmachine: (ha-848504) Calling .GetIP
	I0408 23:39:36.762332   35670 main.go:141] libmachine: (ha-848504) DBG | domain ha-848504 has defined MAC address 52:54:00:31:4b:c3 in network mk-ha-848504
	I0408 23:39:36.762707   35670 main.go:141] libmachine: (ha-848504) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:4b:c3", ip: ""} in network mk-ha-848504: {Iface:virbr1 ExpiryTime:2025-04-09 00:33:50 +0000 UTC Type:0 Mac:52:54:00:31:4b:c3 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:ha-848504 Clientid:01:52:54:00:31:4b:c3}
	I0408 23:39:36.762745   35670 main.go:141] libmachine: (ha-848504) DBG | domain ha-848504 has defined IP address 192.168.39.145 and MAC address 52:54:00:31:4b:c3 in network mk-ha-848504
	I0408 23:39:36.762823   35670 host.go:66] Checking if "ha-848504" exists ...
	I0408 23:39:36.763124   35670 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 23:39:36.763160   35670 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 23:39:36.777165   35670 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37821
	I0408 23:39:36.777551   35670 main.go:141] libmachine: () Calling .GetVersion
	I0408 23:39:36.777982   35670 main.go:141] libmachine: Using API Version  1
	I0408 23:39:36.778001   35670 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 23:39:36.778362   35670 main.go:141] libmachine: () Calling .GetMachineName
	I0408 23:39:36.778542   35670 main.go:141] libmachine: (ha-848504) Calling .DriverName
	I0408 23:39:36.778714   35670 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0408 23:39:36.778734   35670 main.go:141] libmachine: (ha-848504) Calling .GetSSHHostname
	I0408 23:39:36.781211   35670 main.go:141] libmachine: (ha-848504) DBG | domain ha-848504 has defined MAC address 52:54:00:31:4b:c3 in network mk-ha-848504
	I0408 23:39:36.781601   35670 main.go:141] libmachine: (ha-848504) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:4b:c3", ip: ""} in network mk-ha-848504: {Iface:virbr1 ExpiryTime:2025-04-09 00:33:50 +0000 UTC Type:0 Mac:52:54:00:31:4b:c3 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:ha-848504 Clientid:01:52:54:00:31:4b:c3}
	I0408 23:39:36.781635   35670 main.go:141] libmachine: (ha-848504) DBG | domain ha-848504 has defined IP address 192.168.39.145 and MAC address 52:54:00:31:4b:c3 in network mk-ha-848504
	I0408 23:39:36.781710   35670 main.go:141] libmachine: (ha-848504) Calling .GetSSHPort
	I0408 23:39:36.782029   35670 main.go:141] libmachine: (ha-848504) Calling .GetSSHKeyPath
	I0408 23:39:36.782175   35670 main.go:141] libmachine: (ha-848504) Calling .GetSSHUsername
	I0408 23:39:36.782315   35670 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20501-9125/.minikube/machines/ha-848504/id_rsa Username:docker}
	I0408 23:39:36.867796   35670 ssh_runner.go:195] Run: systemctl --version
	I0408 23:39:36.874464   35670 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0408 23:39:36.889569   35670 kubeconfig.go:125] found "ha-848504" server: "https://192.168.39.254:8443"
	I0408 23:39:36.889604   35670 api_server.go:166] Checking apiserver status ...
	I0408 23:39:36.889640   35670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 23:39:36.903386   35670 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1090/cgroup
	W0408 23:39:36.911962   35670 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1090/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0408 23:39:36.912009   35670 ssh_runner.go:195] Run: ls
	I0408 23:39:36.917978   35670 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0408 23:39:36.922017   35670 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0408 23:39:36.922044   35670 status.go:463] ha-848504 apiserver status = Running (err=<nil>)
	I0408 23:39:36.922055   35670 status.go:176] ha-848504 status: &{Name:ha-848504 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0408 23:39:36.922078   35670 status.go:174] checking status of ha-848504-m02 ...
	I0408 23:39:36.922548   35670 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 23:39:36.922594   35670 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 23:39:36.937310   35670 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32891
	I0408 23:39:36.937749   35670 main.go:141] libmachine: () Calling .GetVersion
	I0408 23:39:36.938219   35670 main.go:141] libmachine: Using API Version  1
	I0408 23:39:36.938235   35670 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 23:39:36.938556   35670 main.go:141] libmachine: () Calling .GetMachineName
	I0408 23:39:36.938723   35670 main.go:141] libmachine: (ha-848504-m02) Calling .GetState
	I0408 23:39:36.940192   35670 status.go:371] ha-848504-m02 host status = "Stopped" (err=<nil>)
	I0408 23:39:36.940204   35670 status.go:384] host is not running, skipping remaining checks
	I0408 23:39:36.940210   35670 status.go:176] ha-848504-m02 status: &{Name:ha-848504-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0408 23:39:36.940224   35670 status.go:174] checking status of ha-848504-m03 ...
	I0408 23:39:36.940546   35670 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 23:39:36.940585   35670 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 23:39:36.954944   35670 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41197
	I0408 23:39:36.955279   35670 main.go:141] libmachine: () Calling .GetVersion
	I0408 23:39:36.955637   35670 main.go:141] libmachine: Using API Version  1
	I0408 23:39:36.955667   35670 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 23:39:36.956046   35670 main.go:141] libmachine: () Calling .GetMachineName
	I0408 23:39:36.956217   35670 main.go:141] libmachine: (ha-848504-m03) Calling .GetState
	I0408 23:39:36.957521   35670 status.go:371] ha-848504-m03 host status = "Running" (err=<nil>)
	I0408 23:39:36.957535   35670 host.go:66] Checking if "ha-848504-m03" exists ...
	I0408 23:39:36.957838   35670 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 23:39:36.957877   35670 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 23:39:36.971408   35670 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33581
	I0408 23:39:36.971771   35670 main.go:141] libmachine: () Calling .GetVersion
	I0408 23:39:36.972137   35670 main.go:141] libmachine: Using API Version  1
	I0408 23:39:36.972156   35670 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 23:39:36.972502   35670 main.go:141] libmachine: () Calling .GetMachineName
	I0408 23:39:36.972663   35670 main.go:141] libmachine: (ha-848504-m03) Calling .GetIP
	I0408 23:39:36.975170   35670 main.go:141] libmachine: (ha-848504-m03) DBG | domain ha-848504-m03 has defined MAC address 52:54:00:d0:a3:43 in network mk-ha-848504
	I0408 23:39:36.975562   35670 main.go:141] libmachine: (ha-848504-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:a3:43", ip: ""} in network mk-ha-848504: {Iface:virbr1 ExpiryTime:2025-04-09 00:35:51 +0000 UTC Type:0 Mac:52:54:00:d0:a3:43 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:ha-848504-m03 Clientid:01:52:54:00:d0:a3:43}
	I0408 23:39:36.975582   35670 main.go:141] libmachine: (ha-848504-m03) DBG | domain ha-848504-m03 has defined IP address 192.168.39.228 and MAC address 52:54:00:d0:a3:43 in network mk-ha-848504
	I0408 23:39:36.975710   35670 host.go:66] Checking if "ha-848504-m03" exists ...
	I0408 23:39:36.976128   35670 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 23:39:36.976181   35670 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 23:39:36.989851   35670 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45201
	I0408 23:39:36.990258   35670 main.go:141] libmachine: () Calling .GetVersion
	I0408 23:39:36.990634   35670 main.go:141] libmachine: Using API Version  1
	I0408 23:39:36.990651   35670 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 23:39:36.990911   35670 main.go:141] libmachine: () Calling .GetMachineName
	I0408 23:39:36.991065   35670 main.go:141] libmachine: (ha-848504-m03) Calling .DriverName
	I0408 23:39:36.991219   35670 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0408 23:39:36.991242   35670 main.go:141] libmachine: (ha-848504-m03) Calling .GetSSHHostname
	I0408 23:39:36.993728   35670 main.go:141] libmachine: (ha-848504-m03) DBG | domain ha-848504-m03 has defined MAC address 52:54:00:d0:a3:43 in network mk-ha-848504
	I0408 23:39:36.994165   35670 main.go:141] libmachine: (ha-848504-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:a3:43", ip: ""} in network mk-ha-848504: {Iface:virbr1 ExpiryTime:2025-04-09 00:35:51 +0000 UTC Type:0 Mac:52:54:00:d0:a3:43 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:ha-848504-m03 Clientid:01:52:54:00:d0:a3:43}
	I0408 23:39:36.994198   35670 main.go:141] libmachine: (ha-848504-m03) DBG | domain ha-848504-m03 has defined IP address 192.168.39.228 and MAC address 52:54:00:d0:a3:43 in network mk-ha-848504
	I0408 23:39:36.994316   35670 main.go:141] libmachine: (ha-848504-m03) Calling .GetSSHPort
	I0408 23:39:36.994468   35670 main.go:141] libmachine: (ha-848504-m03) Calling .GetSSHKeyPath
	I0408 23:39:36.994627   35670 main.go:141] libmachine: (ha-848504-m03) Calling .GetSSHUsername
	I0408 23:39:36.994776   35670 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20501-9125/.minikube/machines/ha-848504-m03/id_rsa Username:docker}
	I0408 23:39:37.071737   35670 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0408 23:39:37.087812   35670 kubeconfig.go:125] found "ha-848504" server: "https://192.168.39.254:8443"
	I0408 23:39:37.087851   35670 api_server.go:166] Checking apiserver status ...
	I0408 23:39:37.087911   35670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 23:39:37.102517   35670 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1497/cgroup
	W0408 23:39:37.111294   35670 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1497/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0408 23:39:37.111344   35670 ssh_runner.go:195] Run: ls
	I0408 23:39:37.115241   35670 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0408 23:39:37.119442   35670 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0408 23:39:37.119459   35670 status.go:463] ha-848504-m03 apiserver status = Running (err=<nil>)
	I0408 23:39:37.119468   35670 status.go:176] ha-848504-m03 status: &{Name:ha-848504-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0408 23:39:37.119483   35670 status.go:174] checking status of ha-848504-m04 ...
	I0408 23:39:37.119817   35670 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 23:39:37.119860   35670 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 23:39:37.134593   35670 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43289
	I0408 23:39:37.135018   35670 main.go:141] libmachine: () Calling .GetVersion
	I0408 23:39:37.135526   35670 main.go:141] libmachine: Using API Version  1
	I0408 23:39:37.135546   35670 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 23:39:37.135853   35670 main.go:141] libmachine: () Calling .GetMachineName
	I0408 23:39:37.136047   35670 main.go:141] libmachine: (ha-848504-m04) Calling .GetState
	I0408 23:39:37.137472   35670 status.go:371] ha-848504-m04 host status = "Running" (err=<nil>)
	I0408 23:39:37.137487   35670 host.go:66] Checking if "ha-848504-m04" exists ...
	I0408 23:39:37.137867   35670 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 23:39:37.137908   35670 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 23:39:37.151900   35670 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34551
	I0408 23:39:37.152368   35670 main.go:141] libmachine: () Calling .GetVersion
	I0408 23:39:37.152763   35670 main.go:141] libmachine: Using API Version  1
	I0408 23:39:37.152785   35670 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 23:39:37.153109   35670 main.go:141] libmachine: () Calling .GetMachineName
	I0408 23:39:37.153283   35670 main.go:141] libmachine: (ha-848504-m04) Calling .GetIP
	I0408 23:39:37.156241   35670 main.go:141] libmachine: (ha-848504-m04) DBG | domain ha-848504-m04 has defined MAC address 52:54:00:14:7b:ee in network mk-ha-848504
	I0408 23:39:37.156605   35670 main.go:141] libmachine: (ha-848504-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:7b:ee", ip: ""} in network mk-ha-848504: {Iface:virbr1 ExpiryTime:2025-04-09 00:37:13 +0000 UTC Type:0 Mac:52:54:00:14:7b:ee Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:ha-848504-m04 Clientid:01:52:54:00:14:7b:ee}
	I0408 23:39:37.156629   35670 main.go:141] libmachine: (ha-848504-m04) DBG | domain ha-848504-m04 has defined IP address 192.168.39.11 and MAC address 52:54:00:14:7b:ee in network mk-ha-848504
	I0408 23:39:37.156770   35670 host.go:66] Checking if "ha-848504-m04" exists ...
	I0408 23:39:37.157160   35670 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 23:39:37.157210   35670 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 23:39:37.172380   35670 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45673
	I0408 23:39:37.172790   35670 main.go:141] libmachine: () Calling .GetVersion
	I0408 23:39:37.173193   35670 main.go:141] libmachine: Using API Version  1
	I0408 23:39:37.173218   35670 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 23:39:37.173511   35670 main.go:141] libmachine: () Calling .GetMachineName
	I0408 23:39:37.173677   35670 main.go:141] libmachine: (ha-848504-m04) Calling .DriverName
	I0408 23:39:37.173846   35670 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0408 23:39:37.173873   35670 main.go:141] libmachine: (ha-848504-m04) Calling .GetSSHHostname
	I0408 23:39:37.176279   35670 main.go:141] libmachine: (ha-848504-m04) DBG | domain ha-848504-m04 has defined MAC address 52:54:00:14:7b:ee in network mk-ha-848504
	I0408 23:39:37.176712   35670 main.go:141] libmachine: (ha-848504-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:7b:ee", ip: ""} in network mk-ha-848504: {Iface:virbr1 ExpiryTime:2025-04-09 00:37:13 +0000 UTC Type:0 Mac:52:54:00:14:7b:ee Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:ha-848504-m04 Clientid:01:52:54:00:14:7b:ee}
	I0408 23:39:37.176744   35670 main.go:141] libmachine: (ha-848504-m04) DBG | domain ha-848504-m04 has defined IP address 192.168.39.11 and MAC address 52:54:00:14:7b:ee in network mk-ha-848504
	I0408 23:39:37.176801   35670 main.go:141] libmachine: (ha-848504-m04) Calling .GetSSHPort
	I0408 23:39:37.176968   35670 main.go:141] libmachine: (ha-848504-m04) Calling .GetSSHKeyPath
	I0408 23:39:37.177111   35670 main.go:141] libmachine: (ha-848504-m04) Calling .GetSSHUsername
	I0408 23:39:37.177272   35670 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20501-9125/.minikube/machines/ha-848504-m04/id_rsa Username:docker}
	I0408 23:39:37.255836   35670 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0408 23:39:37.271307   35670 status.go:176] ha-848504-m04 status: &{Name:ha-848504-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (91.55s)
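As the output above shows, stopping a single control-plane member (m02) leaves the other nodes Running while status reports the stopped node and returns a non-zero exit code (exit status 7 here), which the test treats as expected. The two commands involved were simply:
	out/minikube-linux-amd64 -p ha-848504 node stop m02 -v=7 --alsologtostderr
	out/minikube-linux-amd64 -p ha-848504 status -v=7 --alsologtostderr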

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.61s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.61s)

TestMultiControlPlane/serial/RestartSecondaryNode (52.41s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-848504 node start m02 -v=7 --alsologtostderr
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-848504 node start m02 -v=7 --alsologtostderr: (51.527892628s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-848504 status -v=7 --alsologtostderr
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (52.41s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.83s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.83s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (434.08s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-848504 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-848504 -v=7 --alsologtostderr
E0408 23:42:41.013442   16314 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/addons-355098/client.crt: no such file or directory" logger="UnhandledError"
E0408 23:42:57.931775   16314 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/addons-355098/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 stop -p ha-848504 -v=7 --alsologtostderr: (4m34.102361119s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 start -p ha-848504 --wait=true -v=7 --alsologtostderr
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 start -p ha-848504 --wait=true -v=7 --alsologtostderr: (2m39.870788691s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-848504
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (434.08s)

TestMultiControlPlane/serial/DeleteSecondaryNode (18.05s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-848504 node delete m03 -v=7 --alsologtostderr
E0408 23:47:57.938433   16314 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/addons-355098/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-848504 node delete m03 -v=7 --alsologtostderr: (17.336754973s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-848504 status -v=7 --alsologtostderr
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (18.05s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.6s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.60s)

TestMultiControlPlane/serial/StopCluster (272.85s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-848504 stop -v=7 --alsologtostderr
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-848504 stop -v=7 --alsologtostderr: (4m32.746257707s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-848504 status -v=7 --alsologtostderr
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-848504 status -v=7 --alsologtostderr: exit status 7 (100.391949ms)

-- stdout --
	ha-848504
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-848504-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-848504-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0408 23:52:36.644059   39930 out.go:345] Setting OutFile to fd 1 ...
	I0408 23:52:36.644304   39930 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0408 23:52:36.644312   39930 out.go:358] Setting ErrFile to fd 2...
	I0408 23:52:36.644316   39930 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0408 23:52:36.644480   39930 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20501-9125/.minikube/bin
	I0408 23:52:36.644618   39930 out.go:352] Setting JSON to false
	I0408 23:52:36.644640   39930 mustload.go:65] Loading cluster: ha-848504
	I0408 23:52:36.644759   39930 notify.go:220] Checking for updates...
	I0408 23:52:36.644990   39930 config.go:182] Loaded profile config "ha-848504": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0408 23:52:36.645007   39930 status.go:174] checking status of ha-848504 ...
	I0408 23:52:36.645378   39930 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 23:52:36.645417   39930 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 23:52:36.663974   39930 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40295
	I0408 23:52:36.664421   39930 main.go:141] libmachine: () Calling .GetVersion
	I0408 23:52:36.664872   39930 main.go:141] libmachine: Using API Version  1
	I0408 23:52:36.664893   39930 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 23:52:36.665358   39930 main.go:141] libmachine: () Calling .GetMachineName
	I0408 23:52:36.665533   39930 main.go:141] libmachine: (ha-848504) Calling .GetState
	I0408 23:52:36.667373   39930 status.go:371] ha-848504 host status = "Stopped" (err=<nil>)
	I0408 23:52:36.667387   39930 status.go:384] host is not running, skipping remaining checks
	I0408 23:52:36.667394   39930 status.go:176] ha-848504 status: &{Name:ha-848504 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0408 23:52:36.667440   39930 status.go:174] checking status of ha-848504-m02 ...
	I0408 23:52:36.667724   39930 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 23:52:36.667764   39930 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 23:52:36.681818   39930 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42507
	I0408 23:52:36.682243   39930 main.go:141] libmachine: () Calling .GetVersion
	I0408 23:52:36.682663   39930 main.go:141] libmachine: Using API Version  1
	I0408 23:52:36.682679   39930 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 23:52:36.682994   39930 main.go:141] libmachine: () Calling .GetMachineName
	I0408 23:52:36.683157   39930 main.go:141] libmachine: (ha-848504-m02) Calling .GetState
	I0408 23:52:36.684525   39930 status.go:371] ha-848504-m02 host status = "Stopped" (err=<nil>)
	I0408 23:52:36.684537   39930 status.go:384] host is not running, skipping remaining checks
	I0408 23:52:36.684543   39930 status.go:176] ha-848504-m02 status: &{Name:ha-848504-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0408 23:52:36.684559   39930 status.go:174] checking status of ha-848504-m04 ...
	I0408 23:52:36.684812   39930 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 23:52:36.684874   39930 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 23:52:36.699522   39930 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41725
	I0408 23:52:36.699860   39930 main.go:141] libmachine: () Calling .GetVersion
	I0408 23:52:36.700222   39930 main.go:141] libmachine: Using API Version  1
	I0408 23:52:36.700235   39930 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 23:52:36.700484   39930 main.go:141] libmachine: () Calling .GetMachineName
	I0408 23:52:36.700655   39930 main.go:141] libmachine: (ha-848504-m04) Calling .GetState
	I0408 23:52:36.701996   39930 status.go:371] ha-848504-m04 host status = "Stopped" (err=<nil>)
	I0408 23:52:36.702012   39930 status.go:384] host is not running, skipping remaining checks
	I0408 23:52:36.702018   39930 status.go:176] ha-848504-m04 status: &{Name:ha-848504-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (272.85s)

TestMultiControlPlane/serial/RestartCluster (122.82s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 start -p ha-848504 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0408 23:52:57.932365   16314 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/addons-355098/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 start -p ha-848504 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (2m2.081036382s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-848504 status -v=7 --alsologtostderr
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (122.82s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.58s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.58s)

TestMultiControlPlane/serial/AddSecondaryNode (78.09s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-848504 --control-plane -v=7 --alsologtostderr
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 node add -p ha-848504 --control-plane -v=7 --alsologtostderr: (1m17.27464808s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-848504 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (78.09s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.82s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.82s)

TestJSONOutput/start/Command (48.12s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-311392 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-311392 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (48.114983768s)
--- PASS: TestJSONOutput/start/Command (48.12s)
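The TestJSONOutput group starts a throwaway profile with --output=json, so progress is emitted as structured JSON events on stdout rather than human-readable text, and tags the invocation with --user=testUser so the Audit subtests below can attribute it. The start command from this run was:
	out/minikube-linux-amd64 start -p json-output-311392 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio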

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.67s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-311392 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.67s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.61s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-311392 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.61s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (6.69s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-311392 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-311392 --output=json --user=testUser: (6.687906139s)
--- PASS: TestJSONOutput/stop/Command (6.69s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.19s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-902493 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-902493 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (57.383514ms)
-- stdout --
	{"specversion":"1.0","id":"66cc7dc8-4c53-4faa-8ec6-d916ad5408e7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-902493] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"104fe88a-c13a-4e41-9837-444f582fa3cc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20501"}}
	{"specversion":"1.0","id":"fb8186d2-b0a7-4b0c-aa31-1b5bbd77660b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"fdf28534-e2f2-4a0e-989d-126b15277b74","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/20501-9125/kubeconfig"}}
	{"specversion":"1.0","id":"b6482b18-a013-4c98-a23a-8dc31f560e4f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/20501-9125/.minikube"}}
	{"specversion":"1.0","id":"401c554e-6b8f-4d03-a0ef-c467c52eda58","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"2faccdef-5df8-43a2-9e56-b76209c2a96a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"7c5f5c62-7e57-4d2b-adb3-0890c8ca633f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-902493" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-902493
--- PASS: TestErrorJSONOutput (0.19s)
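Note on the JSON output exercised above: with --output=json, minikube emits one CloudEvents-style JSON object per line, with a type such as io.k8s.sigs.minikube.step, .info or .error and a data payload carrying fields like message, currentstep, totalsteps and exitcode (all visible in the stdout block above). A minimal sketch of consuming such a stream; the program below and the idea of piping minikube into it are illustrative, not part of the test suite:

// parse_events.go (hypothetical helper): read line-delimited minikube JSON
// events from stdin and print their type and message. Field names are taken
// from the TestErrorJSONOutput stdout above; all data values there are strings.
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

type minikubeEvent struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var ev minikubeEvent
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // tolerate any non-JSON noise on the stream
		}
		fmt.Printf("%s: %s\n", ev.Type, ev.Data["message"])
	}
}

Piped after a start run with --output=json, the error event above would print as: io.k8s.sigs.minikube.error: The driver 'fail' is not supported on linux/amd64.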

                                                
                                    
TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
TestMinikubeProfile (83.07s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-942871 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-942871 --driver=kvm2  --container-runtime=crio: (39.583493533s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-954903 --driver=kvm2  --container-runtime=crio
E0408 23:57:57.931593   16314 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/addons-355098/client.crt: no such file or directory" logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-954903 --driver=kvm2  --container-runtime=crio: (40.675776225s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-942871
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-954903
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-954903" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-954903
helpers_test.go:175: Cleaning up "first-942871" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-942871
--- PASS: TestMinikubeProfile (83.07s)
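The profile switching above is verified with `profile list -ojson`. The schema of that JSON is not shown in this log, so the sketch below deliberately assumes nothing beyond "the top level is a JSON object" and just lists its keys; the binary path is the one used throughout this report, everything else is illustrative:

// Illustrative only: dump the top-level keys of `minikube profile list -ojson`.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "profile", "list", "-ojson").Output()
	if err != nil {
		log.Fatal(err)
	}
	var top map[string]json.RawMessage // assumption: the top level is a JSON object
	if err := json.Unmarshal(out, &top); err != nil {
		log.Fatalf("unexpected shape: %v", err)
	}
	for key, raw := range top {
		fmt.Printf("%s: %d bytes\n", key, len(raw))
	}
}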

                                                
                                    
TestMountStart/serial/StartWithMountFirst (24.94s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-662407 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-662407 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (23.94122069s)
--- PASS: TestMountStart/serial/StartWithMountFirst (24.94s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.37s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-662407 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-662407 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.37s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (27.8s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-676741 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-676741 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (26.797063515s)
--- PASS: TestMountStart/serial/StartWithMountSecond (27.80s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.37s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-676741 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-676741 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.37s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.67s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-662407 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.67s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.37s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-676741 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-676741 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.37s)

                                                
                                    
TestMountStart/serial/Stop (1.27s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-676741
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-676741: (1.269738695s)
--- PASS: TestMountStart/serial/Stop (1.27s)

                                                
                                    
TestMountStart/serial/RestartStopped (22.82s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-676741
E0408 23:59:21.016030   16314 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/addons-355098/client.crt: no such file or directory" logger="UnhandledError"
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-676741: (21.822190881s)
--- PASS: TestMountStart/serial/RestartStopped (22.82s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.35s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-676741 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-676741 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.35s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (111.66s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-072032 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-072032 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m51.258588702s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-072032 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (111.66s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (7.01s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-072032 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-072032 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-072032 -- rollout status deployment/busybox: (5.652904198s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-072032 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-072032 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-072032 -- exec busybox-58667487b6-8kldb -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-072032 -- exec busybox-58667487b6-v69fr -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-072032 -- exec busybox-58667487b6-8kldb -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-072032 -- exec busybox-58667487b6-v69fr -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-072032 -- exec busybox-58667487b6-8kldb -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-072032 -- exec busybox-58667487b6-v69fr -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (7.01s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.73s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-072032 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-072032 -- exec busybox-58667487b6-8kldb -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-072032 -- exec busybox-58667487b6-8kldb -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-072032 -- exec busybox-58667487b6-v69fr -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-072032 -- exec busybox-58667487b6-v69fr -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.73s)
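The host-ping check above extracts the host IP from busybox's `nslookup host.minikube.internal` output with `awk 'NR==5' | cut -d' ' -f3`, i.e. the third space-separated field of the fifth line, and then pings it. A small sketch of that extraction in Go; the sample resolver output is invented, only its line and field positions mirror the pipeline:

package main

import (
	"fmt"
	"strings"
)

// hostIPFromNslookup mirrors `awk 'NR==5' | cut -d' ' -f3`: fifth line,
// third field, splitting on single spaces exactly as cut -d' ' does.
func hostIPFromNslookup(out string) string {
	lines := strings.Split(out, "\n")
	if len(lines) < 5 {
		return ""
	}
	fields := strings.Split(lines[4], " ")
	if len(fields) < 3 {
		return ""
	}
	return fields[2]
}

func main() {
	// Hypothetical busybox-style resolver output; real output may differ.
	sample := "Server: 10.96.0.10\n" +
		"Address: 10.96.0.10:53\n" +
		"\n" +
		"Name: host.minikube.internal\n" +
		"Address 1: 192.168.39.1 host.minikube.internal\n"
	fmt.Println(hostIPFromNslookup(sample)) // 192.168.39.1, the address pinged above
}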

                                                
                                    
TestMultiNode/serial/AddNode (48.68s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-072032 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-072032 -v 3 --alsologtostderr: (48.121645948s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-072032 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (48.68s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-072032 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.56s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.56s)

                                                
                                    
TestMultiNode/serial/CopyFile (6.96s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-072032 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-072032 cp testdata/cp-test.txt multinode-072032:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-072032 ssh -n multinode-072032 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-072032 cp multinode-072032:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4269375463/001/cp-test_multinode-072032.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-072032 ssh -n multinode-072032 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-072032 cp multinode-072032:/home/docker/cp-test.txt multinode-072032-m02:/home/docker/cp-test_multinode-072032_multinode-072032-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-072032 ssh -n multinode-072032 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-072032 ssh -n multinode-072032-m02 "sudo cat /home/docker/cp-test_multinode-072032_multinode-072032-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-072032 cp multinode-072032:/home/docker/cp-test.txt multinode-072032-m03:/home/docker/cp-test_multinode-072032_multinode-072032-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-072032 ssh -n multinode-072032 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-072032 ssh -n multinode-072032-m03 "sudo cat /home/docker/cp-test_multinode-072032_multinode-072032-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-072032 cp testdata/cp-test.txt multinode-072032-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-072032 ssh -n multinode-072032-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-072032 cp multinode-072032-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4269375463/001/cp-test_multinode-072032-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-072032 ssh -n multinode-072032-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-072032 cp multinode-072032-m02:/home/docker/cp-test.txt multinode-072032:/home/docker/cp-test_multinode-072032-m02_multinode-072032.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-072032 ssh -n multinode-072032-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-072032 ssh -n multinode-072032 "sudo cat /home/docker/cp-test_multinode-072032-m02_multinode-072032.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-072032 cp multinode-072032-m02:/home/docker/cp-test.txt multinode-072032-m03:/home/docker/cp-test_multinode-072032-m02_multinode-072032-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-072032 ssh -n multinode-072032-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-072032 ssh -n multinode-072032-m03 "sudo cat /home/docker/cp-test_multinode-072032-m02_multinode-072032-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-072032 cp testdata/cp-test.txt multinode-072032-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-072032 ssh -n multinode-072032-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-072032 cp multinode-072032-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4269375463/001/cp-test_multinode-072032-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-072032 ssh -n multinode-072032-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-072032 cp multinode-072032-m03:/home/docker/cp-test.txt multinode-072032:/home/docker/cp-test_multinode-072032-m03_multinode-072032.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-072032 ssh -n multinode-072032-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-072032 ssh -n multinode-072032 "sudo cat /home/docker/cp-test_multinode-072032-m03_multinode-072032.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-072032 cp multinode-072032-m03:/home/docker/cp-test.txt multinode-072032-m02:/home/docker/cp-test_multinode-072032-m03_multinode-072032-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-072032 ssh -n multinode-072032-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-072032 ssh -n multinode-072032-m02 "sudo cat /home/docker/cp-test_multinode-072032-m03_multinode-072032-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (6.96s)

                                                
                                    
TestMultiNode/serial/StopNode (2.23s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-072032 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-072032 node stop m03: (1.394892262s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-072032 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-072032 status: exit status 7 (414.708342ms)
-- stdout --
	multinode-072032
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-072032-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-072032-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-072032 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-072032 status --alsologtostderr: exit status 7 (418.542818ms)
-- stdout --
	multinode-072032
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-072032-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-072032-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0409 00:02:39.452214   47679 out.go:345] Setting OutFile to fd 1 ...
	I0409 00:02:39.452300   47679 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0409 00:02:39.452305   47679 out.go:358] Setting ErrFile to fd 2...
	I0409 00:02:39.452309   47679 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0409 00:02:39.452528   47679 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20501-9125/.minikube/bin
	I0409 00:02:39.452680   47679 out.go:352] Setting JSON to false
	I0409 00:02:39.452704   47679 mustload.go:65] Loading cluster: multinode-072032
	I0409 00:02:39.452813   47679 notify.go:220] Checking for updates...
	I0409 00:02:39.453090   47679 config.go:182] Loaded profile config "multinode-072032": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0409 00:02:39.453110   47679 status.go:174] checking status of multinode-072032 ...
	I0409 00:02:39.453505   47679 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0409 00:02:39.453547   47679 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0409 00:02:39.468897   47679 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41901
	I0409 00:02:39.469333   47679 main.go:141] libmachine: () Calling .GetVersion
	I0409 00:02:39.469795   47679 main.go:141] libmachine: Using API Version  1
	I0409 00:02:39.469813   47679 main.go:141] libmachine: () Calling .SetConfigRaw
	I0409 00:02:39.470183   47679 main.go:141] libmachine: () Calling .GetMachineName
	I0409 00:02:39.470391   47679 main.go:141] libmachine: (multinode-072032) Calling .GetState
	I0409 00:02:39.471746   47679 status.go:371] multinode-072032 host status = "Running" (err=<nil>)
	I0409 00:02:39.471759   47679 host.go:66] Checking if "multinode-072032" exists ...
	I0409 00:02:39.472099   47679 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0409 00:02:39.472139   47679 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0409 00:02:39.487828   47679 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33041
	I0409 00:02:39.488298   47679 main.go:141] libmachine: () Calling .GetVersion
	I0409 00:02:39.488715   47679 main.go:141] libmachine: Using API Version  1
	I0409 00:02:39.488738   47679 main.go:141] libmachine: () Calling .SetConfigRaw
	I0409 00:02:39.489101   47679 main.go:141] libmachine: () Calling .GetMachineName
	I0409 00:02:39.489271   47679 main.go:141] libmachine: (multinode-072032) Calling .GetIP
	I0409 00:02:39.492338   47679 main.go:141] libmachine: (multinode-072032) DBG | domain multinode-072032 has defined MAC address 52:54:00:eb:43:55 in network mk-multinode-072032
	I0409 00:02:39.492695   47679 main.go:141] libmachine: (multinode-072032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:43:55", ip: ""} in network mk-multinode-072032: {Iface:virbr1 ExpiryTime:2025-04-09 00:59:56 +0000 UTC Type:0 Mac:52:54:00:eb:43:55 Iaid: IPaddr:192.168.39.33 Prefix:24 Hostname:multinode-072032 Clientid:01:52:54:00:eb:43:55}
	I0409 00:02:39.492730   47679 main.go:141] libmachine: (multinode-072032) DBG | domain multinode-072032 has defined IP address 192.168.39.33 and MAC address 52:54:00:eb:43:55 in network mk-multinode-072032
	I0409 00:02:39.492878   47679 host.go:66] Checking if "multinode-072032" exists ...
	I0409 00:02:39.493166   47679 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0409 00:02:39.493207   47679 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0409 00:02:39.508955   47679 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35575
	I0409 00:02:39.509435   47679 main.go:141] libmachine: () Calling .GetVersion
	I0409 00:02:39.509855   47679 main.go:141] libmachine: Using API Version  1
	I0409 00:02:39.509872   47679 main.go:141] libmachine: () Calling .SetConfigRaw
	I0409 00:02:39.510130   47679 main.go:141] libmachine: () Calling .GetMachineName
	I0409 00:02:39.510281   47679 main.go:141] libmachine: (multinode-072032) Calling .DriverName
	I0409 00:02:39.510408   47679 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0409 00:02:39.510426   47679 main.go:141] libmachine: (multinode-072032) Calling .GetSSHHostname
	I0409 00:02:39.513134   47679 main.go:141] libmachine: (multinode-072032) DBG | domain multinode-072032 has defined MAC address 52:54:00:eb:43:55 in network mk-multinode-072032
	I0409 00:02:39.513551   47679 main.go:141] libmachine: (multinode-072032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:43:55", ip: ""} in network mk-multinode-072032: {Iface:virbr1 ExpiryTime:2025-04-09 00:59:56 +0000 UTC Type:0 Mac:52:54:00:eb:43:55 Iaid: IPaddr:192.168.39.33 Prefix:24 Hostname:multinode-072032 Clientid:01:52:54:00:eb:43:55}
	I0409 00:02:39.513575   47679 main.go:141] libmachine: (multinode-072032) DBG | domain multinode-072032 has defined IP address 192.168.39.33 and MAC address 52:54:00:eb:43:55 in network mk-multinode-072032
	I0409 00:02:39.513705   47679 main.go:141] libmachine: (multinode-072032) Calling .GetSSHPort
	I0409 00:02:39.513869   47679 main.go:141] libmachine: (multinode-072032) Calling .GetSSHKeyPath
	I0409 00:02:39.514006   47679 main.go:141] libmachine: (multinode-072032) Calling .GetSSHUsername
	I0409 00:02:39.514126   47679 sshutil.go:53] new ssh client: &{IP:192.168.39.33 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20501-9125/.minikube/machines/multinode-072032/id_rsa Username:docker}
	I0409 00:02:39.601752   47679 ssh_runner.go:195] Run: systemctl --version
	I0409 00:02:39.607681   47679 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0409 00:02:39.625207   47679 kubeconfig.go:125] found "multinode-072032" server: "https://192.168.39.33:8443"
	I0409 00:02:39.625240   47679 api_server.go:166] Checking apiserver status ...
	I0409 00:02:39.625271   47679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0409 00:02:39.638826   47679 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1098/cgroup
	W0409 00:02:39.650301   47679 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1098/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0409 00:02:39.650350   47679 ssh_runner.go:195] Run: ls
	I0409 00:02:39.654677   47679 api_server.go:253] Checking apiserver healthz at https://192.168.39.33:8443/healthz ...
	I0409 00:02:39.659143   47679 api_server.go:279] https://192.168.39.33:8443/healthz returned 200:
	ok
	I0409 00:02:39.659164   47679 status.go:463] multinode-072032 apiserver status = Running (err=<nil>)
	I0409 00:02:39.659174   47679 status.go:176] multinode-072032 status: &{Name:multinode-072032 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0409 00:02:39.659196   47679 status.go:174] checking status of multinode-072032-m02 ...
	I0409 00:02:39.659509   47679 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0409 00:02:39.659550   47679 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0409 00:02:39.674525   47679 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46661
	I0409 00:02:39.674972   47679 main.go:141] libmachine: () Calling .GetVersion
	I0409 00:02:39.675380   47679 main.go:141] libmachine: Using API Version  1
	I0409 00:02:39.675398   47679 main.go:141] libmachine: () Calling .SetConfigRaw
	I0409 00:02:39.675720   47679 main.go:141] libmachine: () Calling .GetMachineName
	I0409 00:02:39.675921   47679 main.go:141] libmachine: (multinode-072032-m02) Calling .GetState
	I0409 00:02:39.677429   47679 status.go:371] multinode-072032-m02 host status = "Running" (err=<nil>)
	I0409 00:02:39.677445   47679 host.go:66] Checking if "multinode-072032-m02" exists ...
	I0409 00:02:39.677704   47679 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0409 00:02:39.677735   47679 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0409 00:02:39.692412   47679 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45095
	I0409 00:02:39.692824   47679 main.go:141] libmachine: () Calling .GetVersion
	I0409 00:02:39.693266   47679 main.go:141] libmachine: Using API Version  1
	I0409 00:02:39.693287   47679 main.go:141] libmachine: () Calling .SetConfigRaw
	I0409 00:02:39.693613   47679 main.go:141] libmachine: () Calling .GetMachineName
	I0409 00:02:39.693791   47679 main.go:141] libmachine: (multinode-072032-m02) Calling .GetIP
	I0409 00:02:39.696668   47679 main.go:141] libmachine: (multinode-072032-m02) DBG | domain multinode-072032-m02 has defined MAC address 52:54:00:89:4b:7d in network mk-multinode-072032
	I0409 00:02:39.697089   47679 main.go:141] libmachine: (multinode-072032-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:4b:7d", ip: ""} in network mk-multinode-072032: {Iface:virbr1 ExpiryTime:2025-04-09 01:00:56 +0000 UTC Type:0 Mac:52:54:00:89:4b:7d Iaid: IPaddr:192.168.39.75 Prefix:24 Hostname:multinode-072032-m02 Clientid:01:52:54:00:89:4b:7d}
	I0409 00:02:39.697125   47679 main.go:141] libmachine: (multinode-072032-m02) DBG | domain multinode-072032-m02 has defined IP address 192.168.39.75 and MAC address 52:54:00:89:4b:7d in network mk-multinode-072032
	I0409 00:02:39.697286   47679 host.go:66] Checking if "multinode-072032-m02" exists ...
	I0409 00:02:39.697577   47679 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0409 00:02:39.697628   47679 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0409 00:02:39.713090   47679 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41871
	I0409 00:02:39.713554   47679 main.go:141] libmachine: () Calling .GetVersion
	I0409 00:02:39.713992   47679 main.go:141] libmachine: Using API Version  1
	I0409 00:02:39.714020   47679 main.go:141] libmachine: () Calling .SetConfigRaw
	I0409 00:02:39.714291   47679 main.go:141] libmachine: () Calling .GetMachineName
	I0409 00:02:39.714472   47679 main.go:141] libmachine: (multinode-072032-m02) Calling .DriverName
	I0409 00:02:39.714682   47679 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0409 00:02:39.714705   47679 main.go:141] libmachine: (multinode-072032-m02) Calling .GetSSHHostname
	I0409 00:02:39.717345   47679 main.go:141] libmachine: (multinode-072032-m02) DBG | domain multinode-072032-m02 has defined MAC address 52:54:00:89:4b:7d in network mk-multinode-072032
	I0409 00:02:39.717707   47679 main.go:141] libmachine: (multinode-072032-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:4b:7d", ip: ""} in network mk-multinode-072032: {Iface:virbr1 ExpiryTime:2025-04-09 01:00:56 +0000 UTC Type:0 Mac:52:54:00:89:4b:7d Iaid: IPaddr:192.168.39.75 Prefix:24 Hostname:multinode-072032-m02 Clientid:01:52:54:00:89:4b:7d}
	I0409 00:02:39.717737   47679 main.go:141] libmachine: (multinode-072032-m02) DBG | domain multinode-072032-m02 has defined IP address 192.168.39.75 and MAC address 52:54:00:89:4b:7d in network mk-multinode-072032
	I0409 00:02:39.717872   47679 main.go:141] libmachine: (multinode-072032-m02) Calling .GetSSHPort
	I0409 00:02:39.718039   47679 main.go:141] libmachine: (multinode-072032-m02) Calling .GetSSHKeyPath
	I0409 00:02:39.718164   47679 main.go:141] libmachine: (multinode-072032-m02) Calling .GetSSHUsername
	I0409 00:02:39.718309   47679 sshutil.go:53] new ssh client: &{IP:192.168.39.75 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20501-9125/.minikube/machines/multinode-072032-m02/id_rsa Username:docker}
	I0409 00:02:39.794566   47679 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0409 00:02:39.807894   47679 status.go:176] multinode-072032-m02 status: &{Name:multinode-072032-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0409 00:02:39.807953   47679 status.go:174] checking status of multinode-072032-m03 ...
	I0409 00:02:39.808330   47679 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0409 00:02:39.808376   47679 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0409 00:02:39.823321   47679 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42305
	I0409 00:02:39.823888   47679 main.go:141] libmachine: () Calling .GetVersion
	I0409 00:02:39.824393   47679 main.go:141] libmachine: Using API Version  1
	I0409 00:02:39.824417   47679 main.go:141] libmachine: () Calling .SetConfigRaw
	I0409 00:02:39.824771   47679 main.go:141] libmachine: () Calling .GetMachineName
	I0409 00:02:39.824991   47679 main.go:141] libmachine: (multinode-072032-m03) Calling .GetState
	I0409 00:02:39.826399   47679 status.go:371] multinode-072032-m03 host status = "Stopped" (err=<nil>)
	I0409 00:02:39.826413   47679 status.go:384] host is not running, skipping remaining checks
	I0409 00:02:39.826420   47679 status.go:176] multinode-072032-m03 status: &{Name:multinode-072032-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.23s)
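Worth noting from the output above: once a node is stopped, `minikube status` still prints the per-node report on stdout but exits non-zero (exit status 7 in both invocations). A hedged sketch of checking that from Go; the profile name is the one used in this test, everything else is illustrative:

package main

import (
	"errors"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "-p", "multinode-072032", "status")
	out, err := cmd.Output() // stdout is still populated on a non-zero exit
	fmt.Print(string(out))

	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("all nodes running (exit code 0)")
	case errors.As(err, &exitErr):
		// The runs above returned 7 with one node stopped.
		fmt.Println("degraded cluster, exit code:", exitErr.ExitCode())
	default:
		log.Fatal(err) // e.g. binary not found
	}
}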

                                                
                                    
TestMultiNode/serial/StartAfterStop (38.18s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-072032 node start m03 -v=7 --alsologtostderr
E0409 00:02:57.931792   16314 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/addons-355098/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-072032 node start m03 -v=7 --alsologtostderr: (37.580647917s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-072032 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (38.18s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (338.97s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-072032
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-072032
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-072032: (3m3.073729003s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-072032 --wait=true -v=8 --alsologtostderr
E0409 00:07:57.931705   16314 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/addons-355098/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-072032 --wait=true -v=8 --alsologtostderr: (2m35.806848123s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-072032
--- PASS: TestMultiNode/serial/RestartKeepsNodes (338.97s)

                                                
                                    
TestMultiNode/serial/DeleteNode (2.44s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-072032 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-072032 node delete m03: (1.937138011s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-072032 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.44s)
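The node checks above (and again in RestartMultiNode below) use the go-template `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`, which walks every node's condition list and prints only the Ready status, one line per node. A standalone sketch that runs the same template over an invented two-node structure, purely to show what it evaluates to:

package main

import (
	"log"
	"os"
	"text/template"
)

func main() {
	// Same template string as the kubectl invocation above.
	const tmpl = `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`

	// Invented miniature node list; only the field layout mirrors `get nodes -o json`.
	nodes := map[string]interface{}{
		"items": []map[string]interface{}{
			{"status": map[string]interface{}{"conditions": []map[string]string{
				{"type": "Ready", "status": "True"},
				{"type": "MemoryPressure", "status": "False"},
			}}},
			{"status": map[string]interface{}{"conditions": []map[string]string{
				{"type": "Ready", "status": "True"},
			}}},
		},
	}

	t := template.Must(template.New("ready").Parse(tmpl))
	if err := t.Execute(os.Stdout, nodes); err != nil { // prints " True" once per node
		log.Fatal(err)
	}
}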

                                                
                                    
TestMultiNode/serial/StopMultiNode (181.83s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-072032 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-072032 stop: (3m1.669822345s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-072032 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-072032 status: exit status 7 (77.553991ms)
-- stdout --
	multinode-072032
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-072032-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-072032 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-072032 status --alsologtostderr: exit status 7 (84.945913ms)
-- stdout --
	multinode-072032
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-072032-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0409 00:12:01.215230   50684 out.go:345] Setting OutFile to fd 1 ...
	I0409 00:12:01.215518   50684 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0409 00:12:01.215532   50684 out.go:358] Setting ErrFile to fd 2...
	I0409 00:12:01.215539   50684 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0409 00:12:01.215815   50684 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20501-9125/.minikube/bin
	I0409 00:12:01.216069   50684 out.go:352] Setting JSON to false
	I0409 00:12:01.216108   50684 mustload.go:65] Loading cluster: multinode-072032
	I0409 00:12:01.216234   50684 notify.go:220] Checking for updates...
	I0409 00:12:01.216613   50684 config.go:182] Loaded profile config "multinode-072032": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0409 00:12:01.216640   50684 status.go:174] checking status of multinode-072032 ...
	I0409 00:12:01.217234   50684 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0409 00:12:01.217296   50684 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0409 00:12:01.235895   50684 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36155
	I0409 00:12:01.236346   50684 main.go:141] libmachine: () Calling .GetVersion
	I0409 00:12:01.236856   50684 main.go:141] libmachine: Using API Version  1
	I0409 00:12:01.236884   50684 main.go:141] libmachine: () Calling .SetConfigRaw
	I0409 00:12:01.237307   50684 main.go:141] libmachine: () Calling .GetMachineName
	I0409 00:12:01.237505   50684 main.go:141] libmachine: (multinode-072032) Calling .GetState
	I0409 00:12:01.239091   50684 status.go:371] multinode-072032 host status = "Stopped" (err=<nil>)
	I0409 00:12:01.239103   50684 status.go:384] host is not running, skipping remaining checks
	I0409 00:12:01.239109   50684 status.go:176] multinode-072032 status: &{Name:multinode-072032 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0409 00:12:01.239130   50684 status.go:174] checking status of multinode-072032-m02 ...
	I0409 00:12:01.239415   50684 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0409 00:12:01.239454   50684 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0409 00:12:01.253823   50684 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45577
	I0409 00:12:01.254214   50684 main.go:141] libmachine: () Calling .GetVersion
	I0409 00:12:01.254563   50684 main.go:141] libmachine: Using API Version  1
	I0409 00:12:01.254600   50684 main.go:141] libmachine: () Calling .SetConfigRaw
	I0409 00:12:01.254871   50684 main.go:141] libmachine: () Calling .GetMachineName
	I0409 00:12:01.255044   50684 main.go:141] libmachine: (multinode-072032-m02) Calling .GetState
	I0409 00:12:01.256666   50684 status.go:371] multinode-072032-m02 host status = "Stopped" (err=<nil>)
	I0409 00:12:01.256691   50684 status.go:384] host is not running, skipping remaining checks
	I0409 00:12:01.256696   50684 status.go:176] multinode-072032-m02 status: &{Name:multinode-072032-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (181.83s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (114.58s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-072032 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0409 00:12:57.932514   16314 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/addons-355098/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-072032 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m54.082483499s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-072032 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (114.58s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (46.8s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-072032
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-072032-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-072032-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (57.020613ms)
-- stdout --
	* [multinode-072032-m02] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20501
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20501-9125/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20501-9125/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	! Profile name 'multinode-072032-m02' is duplicated with machine name 'multinode-072032-m02' in profile 'multinode-072032'
	X Exiting due to MK_USAGE: Profile name should be unique
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-072032-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-072032-m03 --driver=kvm2  --container-runtime=crio: (45.747149556s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-072032
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-072032: exit status 80 (210.773383ms)
-- stdout --
	* Adding node m03 to cluster multinode-072032 as [worker]
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-072032-m03 already exists in multinode-072032-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-072032-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (46.80s)
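The two rejections above pin down the naming rule being validated: a new profile may not reuse a name already taken by an existing profile or one of its node machine names (hence MK_USAGE for multinode-072032-m02), and `node add` refuses a node whose generated name collides with another profile (hence GUEST_NODE_ADD for m03). A toy sketch of the first check, not minikube's actual implementation:

package main

import "fmt"

// conflicts reports whether a requested profile name is already taken, where
// takenNames holds existing profile names and their node machine names.
func conflicts(newProfile string, takenNames []string) bool {
	for _, name := range takenNames {
		if newProfile == name {
			return true
		}
	}
	return false
}

func main() {
	taken := []string{"multinode-072032", "multinode-072032-m02"}
	fmt.Println(conflicts("multinode-072032-m02", taken)) // true  -> rejected with MK_USAGE above
	fmt.Println(conflicts("multinode-072032-m03", taken)) // false -> the start that succeeded above
}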

                                                
                                    
TestScheduledStopUnix (113.14s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-207437 --memory=2048 --driver=kvm2  --container-runtime=crio
E0409 00:17:57.934386   16314 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/addons-355098/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-207437 --memory=2048 --driver=kvm2  --container-runtime=crio: (41.581600966s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-207437 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-207437 -n scheduled-stop-207437
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-207437 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0409 00:18:11.139410   16314 retry.go:31] will retry after 72.39µs: open /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/scheduled-stop-207437/pid: no such file or directory
I0409 00:18:11.140581   16314 retry.go:31] will retry after 186.857µs: open /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/scheduled-stop-207437/pid: no such file or directory
I0409 00:18:11.141713   16314 retry.go:31] will retry after 205.4µs: open /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/scheduled-stop-207437/pid: no such file or directory
I0409 00:18:11.142859   16314 retry.go:31] will retry after 475.454µs: open /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/scheduled-stop-207437/pid: no such file or directory
I0409 00:18:11.144004   16314 retry.go:31] will retry after 652.966µs: open /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/scheduled-stop-207437/pid: no such file or directory
I0409 00:18:11.145153   16314 retry.go:31] will retry after 1.049052ms: open /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/scheduled-stop-207437/pid: no such file or directory
I0409 00:18:11.146337   16314 retry.go:31] will retry after 686.203µs: open /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/scheduled-stop-207437/pid: no such file or directory
I0409 00:18:11.147502   16314 retry.go:31] will retry after 1.875962ms: open /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/scheduled-stop-207437/pid: no such file or directory
I0409 00:18:11.149694   16314 retry.go:31] will retry after 2.868459ms: open /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/scheduled-stop-207437/pid: no such file or directory
I0409 00:18:11.152885   16314 retry.go:31] will retry after 4.752498ms: open /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/scheduled-stop-207437/pid: no such file or directory
I0409 00:18:11.158090   16314 retry.go:31] will retry after 8.078695ms: open /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/scheduled-stop-207437/pid: no such file or directory
I0409 00:18:11.166241   16314 retry.go:31] will retry after 7.707639ms: open /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/scheduled-stop-207437/pid: no such file or directory
I0409 00:18:11.174470   16314 retry.go:31] will retry after 9.666811ms: open /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/scheduled-stop-207437/pid: no such file or directory
I0409 00:18:11.184728   16314 retry.go:31] will retry after 24.623989ms: open /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/scheduled-stop-207437/pid: no such file or directory
I0409 00:18:11.210039   16314 retry.go:31] will retry after 43.566623ms: open /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/scheduled-stop-207437/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-207437 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-207437 -n scheduled-stop-207437
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-207437
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-207437 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-207437
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-207437: exit status 7 (61.371661ms)

                                                
                                                
-- stdout --
	scheduled-stop-207437
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-207437 -n scheduled-stop-207437
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-207437 -n scheduled-stop-207437: exit status 7 (60.191024ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-207437" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-207437
--- PASS: TestScheduledStopUnix (113.14s)
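Note: this test walks through minikube's scheduled-stop flow end to end; reduced to the commands that appear in the log:

	$ minikube stop -p scheduled-stop-207437 --schedule 5m        # arm a stop five minutes out
	$ minikube stop -p scheduled-stop-207437 --cancel-scheduled   # cancel the pending stop
	$ minikube stop -p scheduled-stop-207437 --schedule 15s       # re-arm with a short delay and let it fire
	$ minikube status -p scheduled-stop-207437                    # exit status 7 once the host reports Stopped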

                                                
                                    
x
+
TestRunningBinaryUpgrade (228.27s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.1212913770 start -p running-upgrade-013434 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.1212913770 start -p running-upgrade-013434 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (2m6.450829756s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-013434 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-013434 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m37.600546693s)
helpers_test.go:175: Cleaning up "running-upgrade-013434" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-013434
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-013434: (1.903472023s)
--- PASS: TestRunningBinaryUpgrade (228.27s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-006125 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-006125 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (75.654352ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-006125] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20501
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20501-9125/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20501-9125/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)
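Note: exit status 14 (MK_USAGE) is the expected result here, since --no-kubernetes and --kubernetes-version are mutually exclusive. Following the hint in the stderr block, a working invocation would be:

	$ minikube config unset kubernetes-version
	$ minikube start -p NoKubernetes-006125 --no-kubernetes --driver=kvm2 --container-runtime=crio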

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (93.95s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-006125 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-006125 --driver=kvm2  --container-runtime=crio: (1m33.717060824s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-006125 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (93.95s)

                                                
                                    
x
+
TestNetworkPlugins/group/false (2.9s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-459514 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-459514 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (101.814549ms)

                                                
                                                
-- stdout --
	* [false-459514] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20501
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20501-9125/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20501-9125/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0409 00:20:35.011114   56219 out.go:345] Setting OutFile to fd 1 ...
	I0409 00:20:35.011374   56219 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0409 00:20:35.011384   56219 out.go:358] Setting ErrFile to fd 2...
	I0409 00:20:35.011388   56219 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0409 00:20:35.011580   56219 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20501-9125/.minikube/bin
	I0409 00:20:35.012163   56219 out.go:352] Setting JSON to false
	I0409 00:20:35.013183   56219 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":7380,"bootTime":1744150655,"procs":197,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0409 00:20:35.013272   56219 start.go:139] virtualization: kvm guest
	I0409 00:20:35.015301   56219 out.go:177] * [false-459514] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0409 00:20:35.016442   56219 notify.go:220] Checking for updates...
	I0409 00:20:35.016472   56219 out.go:177]   - MINIKUBE_LOCATION=20501
	I0409 00:20:35.017796   56219 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0409 00:20:35.019044   56219 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20501-9125/kubeconfig
	I0409 00:20:35.020271   56219 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20501-9125/.minikube
	I0409 00:20:35.021319   56219 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0409 00:20:35.022494   56219 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0409 00:20:35.024049   56219 config.go:182] Loaded profile config "NoKubernetes-006125": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0409 00:20:35.024233   56219 config.go:182] Loaded profile config "offline-crio-989638": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0409 00:20:35.024326   56219 config.go:182] Loaded profile config "running-upgrade-013434": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I0409 00:20:35.024422   56219 driver.go:404] Setting default libvirt URI to qemu:///system
	I0409 00:20:35.064485   56219 out.go:177] * Using the kvm2 driver based on user configuration
	I0409 00:20:35.065451   56219 start.go:297] selected driver: kvm2
	I0409 00:20:35.065461   56219 start.go:901] validating driver "kvm2" against <nil>
	I0409 00:20:35.065471   56219 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0409 00:20:35.067046   56219 out.go:201] 
	W0409 00:20:35.068230   56219 out.go:270] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0409 00:20:35.069253   56219 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-459514 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-459514

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-459514

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-459514

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-459514

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-459514

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-459514

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-459514

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-459514

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-459514

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-459514

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-459514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-459514"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-459514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-459514"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-459514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-459514"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-459514

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-459514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-459514"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-459514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-459514"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-459514" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-459514" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-459514" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-459514" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-459514" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-459514" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-459514" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-459514" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-459514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-459514"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-459514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-459514"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-459514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-459514"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-459514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-459514"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-459514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-459514"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-459514" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-459514" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-459514" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-459514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-459514"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-459514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-459514"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-459514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-459514"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-459514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-459514"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-459514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-459514"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20501-9125/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 09 Apr 2025 00:20:09 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.39.43:8443
  name: offline-crio-989638
contexts:
- context:
    cluster: offline-crio-989638
    extensions:
    - extension:
        last-update: Wed, 09 Apr 2025 00:20:09 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: context_info
    namespace: default
    user: offline-crio-989638
  name: offline-crio-989638
current-context: ""
kind: Config
preferences: {}
users:
- name: offline-crio-989638
  user:
    client-certificate: /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/offline-crio-989638/client.crt
    client-key: /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/offline-crio-989638/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-459514

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-459514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-459514"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-459514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-459514"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-459514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-459514"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-459514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-459514"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-459514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-459514"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-459514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-459514"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-459514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-459514"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-459514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-459514"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-459514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-459514"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-459514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-459514"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-459514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-459514"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-459514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-459514"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-459514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-459514"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-459514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-459514"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-459514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-459514"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-459514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-459514"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-459514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-459514"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-459514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-459514"

                                                
                                                
----------------------- debugLogs end: false-459514 [took: 2.633489788s] --------------------------------
helpers_test.go:175: Cleaning up "false-459514" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-459514
--- PASS: TestNetworkPlugins/group/false (2.90s)
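Note: this group deliberately passes --cni=false and expects the MK_USAGE rejection shown above, because the crio runtime requires a CNI plugin; the debugLogs dump above is empty boilerplate since no cluster was ever created. As an illustration only (not part of the test), any concrete CNI selection avoids the error:

	$ minikube start -p false-459514 --cni=false  --driver=kvm2 --container-runtime=crio   # rejected: crio requires CNI
	$ minikube start -p false-459514 --cni=bridge --driver=kvm2 --container-runtime=crio   # accepted: bridge is one of the built-in CNI choices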

                                                
                                    
x
+
TestPause/serial/Start (72.68s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-378921 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-378921 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m12.678975359s)
--- PASS: TestPause/serial/Start (72.68s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (70.62s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-006125 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-006125 --no-kubernetes --driver=kvm2  --container-runtime=crio: (1m9.583780343s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-006125 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-006125 status -o json: exit status 2 (225.462292ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-006125","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-006125
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (70.62s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (40.81s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-378921 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-378921 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (40.784457351s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (40.81s)

                                                
                                    
x
+
TestNoKubernetes/serial/Start (32.01s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-006125 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-006125 --no-kubernetes --driver=kvm2  --container-runtime=crio: (32.010525078s)
--- PASS: TestNoKubernetes/serial/Start (32.01s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.2s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-006125 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-006125 "sudo systemctl is-active --quiet service kubelet": exit status 1 (197.579117ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.20s)
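Note: the non-zero exit is the point of this check: systemctl is-active exits 3 for an inactive unit, which surfaces as the "Process exited with status 3" seen in the stderr block and confirms no kubelet is running. A manual equivalent against the same profile:

	$ minikube ssh -p NoKubernetes-006125 "sudo systemctl is-active kubelet"
	inactive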

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (21.25s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (12.65381142s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
E0409 00:22:57.931489   16314 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/addons-355098/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:179: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (8.598690725s)
--- PASS: TestNoKubernetes/serial/ProfileList (21.25s)

                                                
                                    
x
+
TestPause/serial/Pause (0.73s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-378921 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.73s)

                                                
                                    
x
+
TestPause/serial/VerifyStatus (0.24s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-378921 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-378921 --output=json --layout=cluster: exit status 2 (234.656319ms)

                                                
                                                
-- stdout --
	{"Name":"pause-378921","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 6 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.35.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-378921","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.24s)
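Note: with --output=json --layout=cluster the status uses HTTP-style codes per component (200 OK, 405 Stopped, 418 Paused), and the command exits 2 for a paused cluster, which is why the non-zero exit above is acceptable. A quick check of the top-level state, assuming jq is available:

	$ minikube status -p pause-378921 --output=json --layout=cluster | jq -r .StatusName
	Paused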

                                                
                                    
x
+
TestPause/serial/Unpause (0.9s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-378921 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.90s)

                                                
                                    
x
+
TestPause/serial/PauseAgain (0.74s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-378921 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.74s)

                                                
                                    
x
+
TestPause/serial/DeletePaused (1.02s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-378921 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-378921 --alsologtostderr -v=5: (1.016232411s)
--- PASS: TestPause/serial/DeletePaused (1.02s)

                                                
                                    
x
+
TestPause/serial/VerifyDeletedResources (15.28s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (15.284558397s)
--- PASS: TestPause/serial/VerifyDeletedResources (15.28s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.28s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-006125
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-006125: (1.284152678s)
--- PASS: TestNoKubernetes/serial/Stop (1.28s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (51.82s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-006125 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-006125 --driver=kvm2  --container-runtime=crio: (51.818293975s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (51.82s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (2.33s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.33s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (150.53s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.2044710586 start -p stopped-upgrade-363740 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.2044710586 start -p stopped-upgrade-363740 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m38.591716285s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.2044710586 -p stopped-upgrade-363740 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.2044710586 -p stopped-upgrade-363740 stop: (11.468195617s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-363740 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-363740 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (40.465292012s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (150.53s)
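Note: the upgrade path under test is: provision with an older release binary, stop the cluster, then start the same profile with the freshly built binary. Reduced to the commands from the log:

	$ /tmp/minikube-v1.26.0.2044710586 start -p stopped-upgrade-363740 --memory=2200 --vm-driver=kvm2 --container-runtime=crio
	$ /tmp/minikube-v1.26.0.2044710586 -p stopped-upgrade-363740 stop
	$ out/minikube-linux-amd64 start -p stopped-upgrade-363740 --memory=2200 --driver=kvm2 --container-runtime=crio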

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.19s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-006125 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-006125 "sudo systemctl is-active --quiet service kubelet": exit status 1 (191.31332ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (132.07s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-459514 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-459514 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (2m12.072263521s)
--- PASS: TestNetworkPlugins/group/auto/Start (132.07s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (0.81s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-363740
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.81s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (65.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-459514 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-459514 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m5.335183462s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (65.34s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-459514 "pgrep -a kubelet"
I0409 00:26:06.768873   16314 config.go:182] Loaded profile config "auto-459514": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (10.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-459514 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-jn6w5" [9d7479ad-4e2d-4a09-aadf-cd52228e8fbb] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-jn6w5" [9d7479ad-4e2d-4a09-aadf-cd52228e8fbb] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.004364594s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-459514 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-459514 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-459514 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (79.03s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-459514 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-459514 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m19.033868782s)
--- PASS: TestNetworkPlugins/group/calico/Start (79.03s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-6vvgp" [b6080909-f7a8-49db-ae26-9516a6cba8bc] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003032685s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.00s)
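Note: this step only waits for the kindnet DaemonSet pod to reach Running in kube-system. A manual equivalent of the readiness check, assuming the kindnet-459514 context:

	$ kubectl --context kindnet-459514 -n kube-system get pods -l app=kindnet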

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-459514 "pgrep -a kubelet"
I0409 00:27:08.244900   16314 config.go:182] Loaded profile config "kindnet-459514": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (10.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-459514 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-wqlml" [863af30c-e04d-4b5d-99ae-6ad1ff5b6d55] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-wqlml" [863af30c-e04d-4b5d-99ae-6ad1ff5b6d55] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.004330047s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-459514 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-459514 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-459514 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (71.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-459514 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-459514 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m11.148163302s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (71.15s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (113.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-459514 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-459514 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m53.172189197s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (113.17s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-nwb7h" [27136b7c-719c-4a80-b9f2-141dca113bfe] Running
E0409 00:27:57.932437   16314 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/addons-355098/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.0043794s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)
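(The ControllerPod gate is just a label-selector readiness wait; a hand-run equivalent, assuming the calico-459514 profile is still up, is `kubectl --context calico-459514 get pods -n kube-system -l k8s-app=calico-node`.)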

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-459514 "pgrep -a kubelet"
I0409 00:27:58.627836   16314 config.go:182] Loaded profile config "calico-459514": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.20s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (11.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-459514 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-px75x" [3f7d6208-d270-4d44-9fa7-8afd56298d6b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-px75x" [3f7d6208-d270-4d44-9fa7-8afd56298d6b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.070443947s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.27s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-459514 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-459514 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-459514 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.18s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (124.97s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-459514 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-459514 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (2m4.970951333s)
--- PASS: TestNetworkPlugins/group/flannel/Start (124.97s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-459514 "pgrep -a kubelet"
I0409 00:28:46.158936   16314 config.go:182] Loaded profile config "custom-flannel-459514": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.20s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (11.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-459514 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-lpxbw" [7eabd57f-527a-4d09-9ef0-a6aa4b6bfd64] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-lpxbw" [7eabd57f-527a-4d09-9ef0-a6aa4b6bfd64] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.004517735s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.24s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-459514 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-459514 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-459514 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (98.69s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-459514 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-459514 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m38.693424s)
--- PASS: TestNetworkPlugins/group/bridge/Start (98.69s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-459514 "pgrep -a kubelet"
I0409 00:29:28.888135   16314 config.go:182] Loaded profile config "enable-default-cni-459514": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.20s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-459514 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-jlndj" [96f737c8-59ff-4776-9b3f-7e35918ae314] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-jlndj" [96f737c8-59ff-4776-9b3f-7e35918ae314] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.004340963s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.24s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-459514 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-459514 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-459514 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-fxhl4" [9d47b154-8c15-4194-a7f3-ce49de35e7c4] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004038929s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-459514 "pgrep -a kubelet"
I0409 00:30:37.171389   16314 config.go:182] Loaded profile config "flannel-459514": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.19s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (10.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-459514 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-8bzxx" [09674d97-1c5a-4e11-bb24-8edc040a5b00] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-8bzxx" [09674d97-1c5a-4e11-bb24-8edc040a5b00] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.004342043s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.20s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-459514 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-459514 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-459514 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.13s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-459514 "pgrep -a kubelet"
I0409 00:30:52.034363   16314 config.go:182] Loaded profile config "bridge-459514": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.27s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (12.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-459514 replace --force -f testdata/netcat-deployment.yaml
net_test.go:149: (dbg) Done: kubectl --context bridge-459514 replace --force -f testdata/netcat-deployment.yaml: (1.051205808s)
I0409 00:30:53.131128   16314 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-jhz7h" [aec3d392-9ba5-4abd-9dfb-407291cd7d52] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-jhz7h" [aec3d392-9ba5-4abd-9dfb-407291cd7d52] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.00338739s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (12.12s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-459514 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-459514 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-459514 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.14s)
E0409 00:39:56.817684   16314 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/enable-default-cni-459514/client.crt: no such file or directory" logger="UnhandledError"

                                                
                                    

Test skip (29/220)

TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.32.2/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.32.2/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.32.2/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.32.2/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.32.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.2/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.32.2/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
TestAddons/serial/Volcano (0.31s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:789: skipping: crio not supported
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-355098 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.31s)

                                                
                                    
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:698: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:422: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:972: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:84: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

                                                
                                    
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
TestNetworkPlugins/group/kubenet (2.72s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:631: 
----------------------- debugLogs start: kubenet-459514 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-459514

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-459514

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-459514

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-459514

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-459514

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-459514

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-459514

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-459514

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-459514

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-459514

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-459514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-459514"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-459514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-459514"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-459514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-459514"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-459514

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-459514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-459514"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-459514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-459514"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-459514" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-459514" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-459514" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-459514" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-459514" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-459514" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-459514" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-459514" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-459514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-459514"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-459514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-459514"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-459514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-459514"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-459514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-459514"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-459514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-459514"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-459514" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-459514" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-459514" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-459514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-459514"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-459514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-459514"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-459514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-459514"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-459514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-459514"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-459514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-459514"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20501-9125/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 09 Apr 2025 00:20:09 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.39.43:8443
  name: offline-crio-989638
contexts:
- context:
    cluster: offline-crio-989638
    extensions:
    - extension:
        last-update: Wed, 09 Apr 2025 00:20:09 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: context_info
    namespace: default
    user: offline-crio-989638
  name: offline-crio-989638
current-context: ""
kind: Config
preferences: {}
users:
- name: offline-crio-989638
  user:
    client-certificate: /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/offline-crio-989638/client.crt
    client-key: /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/offline-crio-989638/client.key
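(For reference, the dump above is just the merged client configuration as kubectl sees it on the Jenkins host; outside the harness the roughly equivalent command is `kubectl config view`. The current-context is empty and only the leftover offline-crio-989638 entries are present because, as the messages above show, the kubenet-459514 profile was never created.)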

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-459514

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-459514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-459514"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-459514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-459514"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-459514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-459514"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-459514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-459514"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-459514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-459514"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-459514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-459514"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-459514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-459514"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-459514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-459514"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-459514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-459514"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-459514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-459514"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-459514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-459514"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-459514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-459514"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-459514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-459514"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-459514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-459514"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-459514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-459514"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-459514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-459514"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-459514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-459514"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-459514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-459514"

                                                
                                                
----------------------- debugLogs end: kubenet-459514 [took: 2.579771581s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-459514" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-459514
--- SKIP: TestNetworkPlugins/group/kubenet (2.72s)

                                                
                                    
TestNetworkPlugins/group/cilium (3.73s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:631: 
----------------------- debugLogs start: cilium-459514 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-459514

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-459514

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-459514

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-459514

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-459514

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-459514

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-459514

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-459514

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-459514

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-459514

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-459514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-459514"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-459514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-459514"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-459514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-459514"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods:
Error in configuration: context was not found for specified context: cilium-459514

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-459514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-459514"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-459514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-459514"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-459514" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-459514" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-459514" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-459514" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-459514" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-459514" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-459514" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-459514" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-459514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-459514"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-459514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-459514"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-459514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-459514"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-459514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-459514"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-459514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-459514"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-459514

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-459514

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-459514" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-459514" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-459514

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-459514

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-459514" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-459514" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-459514" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-459514" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-459514" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-459514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-459514"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-459514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-459514"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-459514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-459514"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-459514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-459514"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-459514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-459514"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20501-9125/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 09 Apr 2025 00:20:09 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.39.43:8443
  name: offline-crio-989638
contexts:
- context:
    cluster: offline-crio-989638
    extensions:
    - extension:
        last-update: Wed, 09 Apr 2025 00:20:09 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: context_info
    namespace: default
    user: offline-crio-989638
  name: offline-crio-989638
current-context: ""
kind: Config
preferences: {}
users:
- name: offline-crio-989638
  user:
    client-certificate: /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/offline-crio-989638/client.crt
    client-key: /home/jenkins/minikube-integration/20501-9125/.minikube/profiles/offline-crio-989638/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-459514

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-459514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-459514"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-459514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-459514"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-459514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-459514"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-459514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-459514"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-459514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-459514"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-459514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-459514"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-459514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-459514"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-459514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-459514"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-459514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-459514"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-459514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-459514"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-459514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-459514"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-459514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-459514"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-459514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-459514"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-459514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-459514"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-459514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-459514"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-459514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-459514"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-459514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-459514"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-459514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-459514"

                                                
                                                
----------------------- debugLogs end: cilium-459514 [took: 3.56721441s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-459514" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-459514
--- SKIP: TestNetworkPlugins/group/cilium (3.73s)
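
Every cilium-459514 probe above fails the same way because the profile, and therefore its kubeconfig context, no longer exists by the time the debug collector runs, so the block records collection noise rather than cluster state. Below is a minimal sketch, assuming a POSIX shell and the same out/minikube-linux-amd64 binary path used elsewhere in this report, of guarding the probes on profile existence; the guard itself is hypothetical and not part of the test harness.

	#!/bin/sh
	# Hypothetical guard: run the kubectl-based debug probes only when the
	# minikube profile (and therefore its kubeconfig context) actually exists.
	PROFILE="cilium-459514"
	if out/minikube-linux-amd64 profile list 2>/dev/null | grep -q "$PROFILE"; then
	  # minikube-created clusters use the profile name as the context name.
	  kubectl --context "$PROFILE" get nodes,services,endpoints,daemonsets,deployments,pods -A
	else
	  echo ">>> skipping debug probes: profile $PROFILE not found"
	fi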

                                                
                                    