Test Report: KVM_Linux_crio 21997

4e6ec0ce1ba9ad510ab2048b3373e13c9f965153:2025-12-05:42642

Failed tests (5/437)

Order  Failed test                                             Duration (s)
46     TestAddons/parallel/Ingress                             156.34
59     TestCertExpiration                                      1074.5
135    TestFunctional/parallel/ImageCommands/ImageListShort    2.23
139    TestFunctional/parallel/ImageCommands/ImageBuild        6.83
345    TestPreload                                             117.06
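
The Ingress failure below comes down to a curl that never got a response from inside the VM: the ssh command exited with status 28, which is curl's "operation timed out" code. A minimal sketch of re-running the same check by hand, assuming the addons-704432 profile from this run is still available (the --max-time bound is an addition so a hang fails fast instead of taking the 2m13s the test waited):

    # Same probe the test performs, with an explicit timeout added
    out/minikube-linux-amd64 -p addons-704432 ssh "curl -s --max-time 30 -H 'Host: nginx.example.com' http://127.0.0.1/"
    # If it still times out, inspect the ingress controller pods and the Ingress object
    kubectl --context addons-704432 -n ingress-nginx get pods
    kubectl --context addons-704432 get ingress -A
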
TestAddons/parallel/Ingress (156.34s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-704432 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-704432 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-704432 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [45ff2b50-5b5e-4ab0-b6be-6d89182ace3e] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [45ff2b50-5b5e-4ab0-b6be-6d89182ace3e] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.004629425s
I1205 06:09:22.905828   16702 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-704432 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-704432 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m13.639837992s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-704432 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-704432 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.39.31
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-704432 -n addons-704432
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-704432 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-704432 logs -n 25: (1.147943271s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                  ARGS                                                                                                                                                                                                                                  │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-only-826602                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-only-826602 │ jenkins │ v1.37.0 │ 05 Dec 25 06:05 UTC │ 05 Dec 25 06:05 UTC │
	│ start   │ --download-only -p binary-mirror-869778 --alsologtostderr --binary-mirror http://127.0.0.1:35975 --driver=kvm2  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-869778 │ jenkins │ v1.37.0 │ 05 Dec 25 06:05 UTC │                     │
	│ delete  │ -p binary-mirror-869778                                                                                                                                                                                                                                                                                                                                                                                                                                                │ binary-mirror-869778 │ jenkins │ v1.37.0 │ 05 Dec 25 06:05 UTC │ 05 Dec 25 06:05 UTC │
	│ addons  │ disable dashboard -p addons-704432                                                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-704432        │ jenkins │ v1.37.0 │ 05 Dec 25 06:05 UTC │                     │
	│ addons  │ enable dashboard -p addons-704432                                                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-704432        │ jenkins │ v1.37.0 │ 05 Dec 25 06:05 UTC │                     │
	│ start   │ -p addons-704432 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-704432        │ jenkins │ v1.37.0 │ 05 Dec 25 06:05 UTC │ 05 Dec 25 06:08 UTC │
	│ addons  │ addons-704432 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                            │ addons-704432        │ jenkins │ v1.37.0 │ 05 Dec 25 06:08 UTC │ 05 Dec 25 06:08 UTC │
	│ addons  │ addons-704432 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-704432        │ jenkins │ v1.37.0 │ 05 Dec 25 06:08 UTC │ 05 Dec 25 06:08 UTC │
	│ addons  │ addons-704432 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                   │ addons-704432        │ jenkins │ v1.37.0 │ 05 Dec 25 06:08 UTC │ 05 Dec 25 06:08 UTC │
	│ ssh     │ addons-704432 ssh cat /opt/local-path-provisioner/pvc-dfd46569-e5e3-46ac-8dd9-36ab90471008_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                      │ addons-704432        │ jenkins │ v1.37.0 │ 05 Dec 25 06:09 UTC │ 05 Dec 25 06:09 UTC │
	│ addons  │ addons-704432 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                        │ addons-704432        │ jenkins │ v1.37.0 │ 05 Dec 25 06:09 UTC │ 05 Dec 25 06:09 UTC │
	│ ip      │ addons-704432 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-704432        │ jenkins │ v1.37.0 │ 05 Dec 25 06:09 UTC │ 05 Dec 25 06:09 UTC │
	│ addons  │ addons-704432 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-704432        │ jenkins │ v1.37.0 │ 05 Dec 25 06:09 UTC │ 05 Dec 25 06:09 UTC │
	│ addons  │ addons-704432 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                               │ addons-704432        │ jenkins │ v1.37.0 │ 05 Dec 25 06:09 UTC │ 05 Dec 25 06:09 UTC │
	│ addons  │ addons-704432 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                               │ addons-704432        │ jenkins │ v1.37.0 │ 05 Dec 25 06:09 UTC │ 05 Dec 25 06:09 UTC │
	│ addons  │ addons-704432 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-704432        │ jenkins │ v1.37.0 │ 05 Dec 25 06:09 UTC │ 05 Dec 25 06:09 UTC │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-704432                                                                                                                                                                                                                                                                                                                                                                                         │ addons-704432        │ jenkins │ v1.37.0 │ 05 Dec 25 06:09 UTC │ 05 Dec 25 06:09 UTC │
	│ addons  │ addons-704432 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-704432        │ jenkins │ v1.37.0 │ 05 Dec 25 06:09 UTC │ 05 Dec 25 06:09 UTC │
	│ addons  │ enable headlamp -p addons-704432 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                │ addons-704432        │ jenkins │ v1.37.0 │ 05 Dec 25 06:09 UTC │ 05 Dec 25 06:09 UTC │
	│ ssh     │ addons-704432 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                               │ addons-704432        │ jenkins │ v1.37.0 │ 05 Dec 25 06:09 UTC │                     │
	│ addons  │ addons-704432 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-704432        │ jenkins │ v1.37.0 │ 05 Dec 25 06:09 UTC │ 05 Dec 25 06:09 UTC │
	│ addons  │ addons-704432 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-704432        │ jenkins │ v1.37.0 │ 05 Dec 25 06:10 UTC │ 05 Dec 25 06:10 UTC │
	│ addons  │ addons-704432 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                    │ addons-704432        │ jenkins │ v1.37.0 │ 05 Dec 25 06:10 UTC │ 05 Dec 25 06:10 UTC │
	│ addons  │ addons-704432 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                │ addons-704432        │ jenkins │ v1.37.0 │ 05 Dec 25 06:10 UTC │ 05 Dec 25 06:10 UTC │
	│ ip      │ addons-704432 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-704432        │ jenkins │ v1.37.0 │ 05 Dec 25 06:11 UTC │ 05 Dec 25 06:11 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/05 06:05:10
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 06:05:10.455308   17561 out.go:360] Setting OutFile to fd 1 ...
	I1205 06:05:10.455543   17561 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:05:10.455551   17561 out.go:374] Setting ErrFile to fd 2...
	I1205 06:05:10.455556   17561 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:05:10.455757   17561 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-12744/.minikube/bin
	I1205 06:05:10.456246   17561 out.go:368] Setting JSON to false
	I1205 06:05:10.457042   17561 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":2855,"bootTime":1764911855,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1205 06:05:10.457093   17561 start.go:143] virtualization: kvm guest
	I1205 06:05:10.458860   17561 out.go:179] * [addons-704432] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1205 06:05:10.460110   17561 notify.go:221] Checking for updates...
	I1205 06:05:10.460162   17561 out.go:179]   - MINIKUBE_LOCATION=21997
	I1205 06:05:10.461231   17561 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 06:05:10.462467   17561 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-12744/kubeconfig
	I1205 06:05:10.463501   17561 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-12744/.minikube
	I1205 06:05:10.464708   17561 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1205 06:05:10.469229   17561 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 06:05:10.470508   17561 driver.go:422] Setting default libvirt URI to qemu:///system
	I1205 06:05:10.498931   17561 out.go:179] * Using the kvm2 driver based on user configuration
	I1205 06:05:10.500116   17561 start.go:309] selected driver: kvm2
	I1205 06:05:10.500128   17561 start.go:927] validating driver "kvm2" against <nil>
	I1205 06:05:10.500137   17561 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 06:05:10.500809   17561 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1205 06:05:10.501023   17561 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 06:05:10.501047   17561 cni.go:84] Creating CNI manager for ""
	I1205 06:05:10.501084   17561 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 06:05:10.501092   17561 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1205 06:05:10.501129   17561 start.go:353] cluster config:
	{Name:addons-704432 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-704432 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: A
utoPauseInterval:1m0s}
	I1205 06:05:10.501222   17561 iso.go:125] acquiring lock: {Name:mk8940d2199650f8674488213bff178b8d82a626 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:05:10.502628   17561 out.go:179] * Starting "addons-704432" primary control-plane node in "addons-704432" cluster
	I1205 06:05:10.503635   17561 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1205 06:05:10.503666   17561 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21997-12744/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1205 06:05:10.503675   17561 cache.go:65] Caching tarball of preloaded images
	I1205 06:05:10.503760   17561 preload.go:238] Found /home/jenkins/minikube-integration/21997-12744/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1205 06:05:10.503771   17561 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1205 06:05:10.504068   17561 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/addons-704432/config.json ...
	I1205 06:05:10.504089   17561 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/addons-704432/config.json: {Name:mkbb1f6db4febbfbd7b6ec26247f0d05f00aad32 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 06:05:10.504241   17561 start.go:360] acquireMachinesLock for addons-704432: {Name:mk6f885ffa3cca5ad53a733e47a4c8f74f8579b1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 06:05:10.504288   17561 start.go:364] duration metric: took 32.644µs to acquireMachinesLock for "addons-704432"
	I1205 06:05:10.504305   17561 start.go:93] Provisioning new machine with config: &{Name:addons-704432 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.34.2 ClusterName:addons-704432 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror
: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 06:05:10.504359   17561 start.go:125] createHost starting for "" (driver="kvm2")
	I1205 06:05:10.506574   17561 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I1205 06:05:10.506700   17561 start.go:159] libmachine.API.Create for "addons-704432" (driver="kvm2")
	I1205 06:05:10.506724   17561 client.go:173] LocalClient.Create starting
	I1205 06:05:10.506817   17561 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/21997-12744/.minikube/certs/ca.pem
	I1205 06:05:10.605722   17561 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21997-12744/.minikube/certs/cert.pem
	I1205 06:05:10.631586   17561 main.go:143] libmachine: creating domain...
	I1205 06:05:10.631606   17561 main.go:143] libmachine: creating network...
	I1205 06:05:10.633072   17561 main.go:143] libmachine: found existing default network
	I1205 06:05:10.633410   17561 main.go:143] libmachine: <network>
	  <name>default</name>
	  <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	  <forward mode='nat'>
	    <nat>
	      <port start='1024' end='65535'/>
	    </nat>
	  </forward>
	  <bridge name='virbr0' stp='on' delay='0'/>
	  <mac address='52:54:00:10:a2:1d'/>
	  <ip address='192.168.122.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.122.2' end='192.168.122.254'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1205 06:05:10.634029   17561 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001cf4c70}
	I1205 06:05:10.634159   17561 main.go:143] libmachine: defining private network:
	
	<network>
	  <name>mk-addons-704432</name>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1205 06:05:10.639790   17561 main.go:143] libmachine: creating private network mk-addons-704432 192.168.39.0/24...
	I1205 06:05:10.704102   17561 main.go:143] libmachine: private network mk-addons-704432 192.168.39.0/24 created
	I1205 06:05:10.704418   17561 main.go:143] libmachine: <network>
	  <name>mk-addons-704432</name>
	  <uuid>1dce33fc-46df-47b8-b373-076af7f7e323</uuid>
	  <bridge name='virbr1' stp='on' delay='0'/>
	  <mac address='52:54:00:88:2e:dd'/>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1205 06:05:10.704454   17561 main.go:143] libmachine: setting up store path in /home/jenkins/minikube-integration/21997-12744/.minikube/machines/addons-704432 ...
	I1205 06:05:10.704481   17561 main.go:143] libmachine: building disk image from file:///home/jenkins/minikube-integration/21997-12744/.minikube/cache/iso/amd64/minikube-v1.37.0-1763503576-21924-amd64.iso
	I1205 06:05:10.704493   17561 common.go:152] Making disk image using store path: /home/jenkins/minikube-integration/21997-12744/.minikube
	I1205 06:05:10.704571   17561 main.go:143] libmachine: Downloading /home/jenkins/minikube-integration/21997-12744/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21997-12744/.minikube/cache/iso/amd64/minikube-v1.37.0-1763503576-21924-amd64.iso...
	I1205 06:05:10.954804   17561 common.go:159] Creating ssh key: /home/jenkins/minikube-integration/21997-12744/.minikube/machines/addons-704432/id_rsa...
	I1205 06:05:10.985638   17561 common.go:165] Creating raw disk image: /home/jenkins/minikube-integration/21997-12744/.minikube/machines/addons-704432/addons-704432.rawdisk...
	I1205 06:05:10.985676   17561 main.go:143] libmachine: Writing magic tar header
	I1205 06:05:10.985709   17561 main.go:143] libmachine: Writing SSH key tar header
	I1205 06:05:10.985777   17561 common.go:179] Fixing permissions on /home/jenkins/minikube-integration/21997-12744/.minikube/machines/addons-704432 ...
	I1205 06:05:10.985838   17561 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21997-12744/.minikube/machines/addons-704432
	I1205 06:05:10.985864   17561 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21997-12744/.minikube/machines/addons-704432 (perms=drwx------)
	I1205 06:05:10.985876   17561 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21997-12744/.minikube/machines
	I1205 06:05:10.985884   17561 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21997-12744/.minikube/machines (perms=drwxr-xr-x)
	I1205 06:05:10.985897   17561 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21997-12744/.minikube
	I1205 06:05:10.985906   17561 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21997-12744/.minikube (perms=drwxr-xr-x)
	I1205 06:05:10.985915   17561 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21997-12744
	I1205 06:05:10.985923   17561 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21997-12744 (perms=drwxrwxr-x)
	I1205 06:05:10.985931   17561 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration
	I1205 06:05:10.985941   17561 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1205 06:05:10.985950   17561 main.go:143] libmachine: checking permissions on dir: /home/jenkins
	I1205 06:05:10.985960   17561 main.go:143] libmachine: setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1205 06:05:10.985971   17561 main.go:143] libmachine: checking permissions on dir: /home
	I1205 06:05:10.985977   17561 main.go:143] libmachine: skipping /home - not owner
	I1205 06:05:10.985981   17561 main.go:143] libmachine: defining domain...
	I1205 06:05:10.987049   17561 main.go:143] libmachine: defining domain using XML: 
	<domain type='kvm'>
	  <name>addons-704432</name>
	  <memory unit='MiB'>4096</memory>
	  <vcpu>2</vcpu>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough'>
	  </cpu>
	  <os>
	    <type>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <devices>
	    <disk type='file' device='cdrom'>
	      <source file='/home/jenkins/minikube-integration/21997-12744/.minikube/machines/addons-704432/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' cache='default' io='threads' />
	      <source file='/home/jenkins/minikube-integration/21997-12744/.minikube/machines/addons-704432/addons-704432.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	    </disk>
	    <interface type='network'>
	      <source network='mk-addons-704432'/>
	      <model type='virtio'/>
	    </interface>
	    <interface type='network'>
	      <source network='default'/>
	      <model type='virtio'/>
	    </interface>
	    <serial type='pty'>
	      <target port='0'/>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	    </rng>
	  </devices>
	</domain>
	
	I1205 06:05:10.995207   17561 main.go:143] libmachine: domain addons-704432 has defined MAC address 52:54:00:1e:6b:ab in network default
	I1205 06:05:10.995763   17561 main.go:143] libmachine: domain addons-704432 has defined MAC address 52:54:00:03:26:6a in network mk-addons-704432
	I1205 06:05:10.995786   17561 main.go:143] libmachine: starting domain...
	I1205 06:05:10.995791   17561 main.go:143] libmachine: ensuring networks are active...
	I1205 06:05:10.996451   17561 main.go:143] libmachine: Ensuring network default is active
	I1205 06:05:10.996808   17561 main.go:143] libmachine: Ensuring network mk-addons-704432 is active
	I1205 06:05:10.998481   17561 main.go:143] libmachine: getting domain XML...
	I1205 06:05:10.999354   17561 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>addons-704432</name>
	  <uuid>9d1ccf64-3f5a-409b-9d6d-1dc0517842af</uuid>
	  <memory unit='KiB'>4194304</memory>
	  <currentMemory unit='KiB'>4194304</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/21997-12744/.minikube/machines/addons-704432/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/21997-12744/.minikube/machines/addons-704432/addons-704432.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:03:26:6a'/>
	      <source network='mk-addons-704432'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:1e:6b:ab'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
	
	I1205 06:05:12.254079   17561 main.go:143] libmachine: waiting for domain to start...
	I1205 06:05:12.255285   17561 main.go:143] libmachine: domain is now running
	I1205 06:05:12.255304   17561 main.go:143] libmachine: waiting for IP...
	I1205 06:05:12.256080   17561 main.go:143] libmachine: domain addons-704432 has defined MAC address 52:54:00:03:26:6a in network mk-addons-704432
	I1205 06:05:12.256514   17561 main.go:143] libmachine: no network interface addresses found for domain addons-704432 (source=lease)
	I1205 06:05:12.256531   17561 main.go:143] libmachine: trying to list again with source=arp
	I1205 06:05:12.256794   17561 main.go:143] libmachine: unable to find current IP address of domain addons-704432 in network mk-addons-704432 (interfaces detected: [])
	I1205 06:05:12.256835   17561 retry.go:31] will retry after 302.372188ms: waiting for domain to come up
	I1205 06:05:12.560386   17561 main.go:143] libmachine: domain addons-704432 has defined MAC address 52:54:00:03:26:6a in network mk-addons-704432
	I1205 06:05:12.560935   17561 main.go:143] libmachine: no network interface addresses found for domain addons-704432 (source=lease)
	I1205 06:05:12.560951   17561 main.go:143] libmachine: trying to list again with source=arp
	I1205 06:05:12.561245   17561 main.go:143] libmachine: unable to find current IP address of domain addons-704432 in network mk-addons-704432 (interfaces detected: [])
	I1205 06:05:12.561284   17561 retry.go:31] will retry after 291.227881ms: waiting for domain to come up
	I1205 06:05:12.853593   17561 main.go:143] libmachine: domain addons-704432 has defined MAC address 52:54:00:03:26:6a in network mk-addons-704432
	I1205 06:05:12.854054   17561 main.go:143] libmachine: no network interface addresses found for domain addons-704432 (source=lease)
	I1205 06:05:12.854068   17561 main.go:143] libmachine: trying to list again with source=arp
	I1205 06:05:12.854325   17561 main.go:143] libmachine: unable to find current IP address of domain addons-704432 in network mk-addons-704432 (interfaces detected: [])
	I1205 06:05:12.854354   17561 retry.go:31] will retry after 296.547253ms: waiting for domain to come up
	I1205 06:05:13.152869   17561 main.go:143] libmachine: domain addons-704432 has defined MAC address 52:54:00:03:26:6a in network mk-addons-704432
	I1205 06:05:13.153609   17561 main.go:143] libmachine: no network interface addresses found for domain addons-704432 (source=lease)
	I1205 06:05:13.153629   17561 main.go:143] libmachine: trying to list again with source=arp
	I1205 06:05:13.153960   17561 main.go:143] libmachine: unable to find current IP address of domain addons-704432 in network mk-addons-704432 (interfaces detected: [])
	I1205 06:05:13.153990   17561 retry.go:31] will retry after 581.464729ms: waiting for domain to come up
	I1205 06:05:13.736719   17561 main.go:143] libmachine: domain addons-704432 has defined MAC address 52:54:00:03:26:6a in network mk-addons-704432
	I1205 06:05:13.737176   17561 main.go:143] libmachine: no network interface addresses found for domain addons-704432 (source=lease)
	I1205 06:05:13.737195   17561 main.go:143] libmachine: trying to list again with source=arp
	I1205 06:05:13.737495   17561 main.go:143] libmachine: unable to find current IP address of domain addons-704432 in network mk-addons-704432 (interfaces detected: [])
	I1205 06:05:13.737538   17561 retry.go:31] will retry after 752.23662ms: waiting for domain to come up
	I1205 06:05:14.491096   17561 main.go:143] libmachine: domain addons-704432 has defined MAC address 52:54:00:03:26:6a in network mk-addons-704432
	I1205 06:05:14.491610   17561 main.go:143] libmachine: no network interface addresses found for domain addons-704432 (source=lease)
	I1205 06:05:14.491627   17561 main.go:143] libmachine: trying to list again with source=arp
	I1205 06:05:14.491935   17561 main.go:143] libmachine: unable to find current IP address of domain addons-704432 in network mk-addons-704432 (interfaces detected: [])
	I1205 06:05:14.491964   17561 retry.go:31] will retry after 728.196683ms: waiting for domain to come up
	I1205 06:05:15.221740   17561 main.go:143] libmachine: domain addons-704432 has defined MAC address 52:54:00:03:26:6a in network mk-addons-704432
	I1205 06:05:15.222193   17561 main.go:143] libmachine: no network interface addresses found for domain addons-704432 (source=lease)
	I1205 06:05:15.222206   17561 main.go:143] libmachine: trying to list again with source=arp
	I1205 06:05:15.222475   17561 main.go:143] libmachine: unable to find current IP address of domain addons-704432 in network mk-addons-704432 (interfaces detected: [])
	I1205 06:05:15.222501   17561 retry.go:31] will retry after 1.113533832s: waiting for domain to come up
	I1205 06:05:16.337327   17561 main.go:143] libmachine: domain addons-704432 has defined MAC address 52:54:00:03:26:6a in network mk-addons-704432
	I1205 06:05:16.337751   17561 main.go:143] libmachine: no network interface addresses found for domain addons-704432 (source=lease)
	I1205 06:05:16.337764   17561 main.go:143] libmachine: trying to list again with source=arp
	I1205 06:05:16.338076   17561 main.go:143] libmachine: unable to find current IP address of domain addons-704432 in network mk-addons-704432 (interfaces detected: [])
	I1205 06:05:16.338104   17561 retry.go:31] will retry after 1.430311369s: waiting for domain to come up
	I1205 06:05:17.770735   17561 main.go:143] libmachine: domain addons-704432 has defined MAC address 52:54:00:03:26:6a in network mk-addons-704432
	I1205 06:05:17.771279   17561 main.go:143] libmachine: no network interface addresses found for domain addons-704432 (source=lease)
	I1205 06:05:17.771296   17561 main.go:143] libmachine: trying to list again with source=arp
	I1205 06:05:17.771578   17561 main.go:143] libmachine: unable to find current IP address of domain addons-704432 in network mk-addons-704432 (interfaces detected: [])
	I1205 06:05:17.771613   17561 retry.go:31] will retry after 1.687555708s: waiting for domain to come up
	I1205 06:05:19.461454   17561 main.go:143] libmachine: domain addons-704432 has defined MAC address 52:54:00:03:26:6a in network mk-addons-704432
	I1205 06:05:19.461935   17561 main.go:143] libmachine: no network interface addresses found for domain addons-704432 (source=lease)
	I1205 06:05:19.461949   17561 main.go:143] libmachine: trying to list again with source=arp
	I1205 06:05:19.462213   17561 main.go:143] libmachine: unable to find current IP address of domain addons-704432 in network mk-addons-704432 (interfaces detected: [])
	I1205 06:05:19.462239   17561 retry.go:31] will retry after 2.268086881s: waiting for domain to come up
	I1205 06:05:21.732211   17561 main.go:143] libmachine: domain addons-704432 has defined MAC address 52:54:00:03:26:6a in network mk-addons-704432
	I1205 06:05:21.732809   17561 main.go:143] libmachine: no network interface addresses found for domain addons-704432 (source=lease)
	I1205 06:05:21.732826   17561 main.go:143] libmachine: trying to list again with source=arp
	I1205 06:05:21.733213   17561 main.go:143] libmachine: unable to find current IP address of domain addons-704432 in network mk-addons-704432 (interfaces detected: [])
	I1205 06:05:21.733244   17561 retry.go:31] will retry after 1.812259795s: waiting for domain to come up
	I1205 06:05:23.548650   17561 main.go:143] libmachine: domain addons-704432 has defined MAC address 52:54:00:03:26:6a in network mk-addons-704432
	I1205 06:05:23.549320   17561 main.go:143] libmachine: no network interface addresses found for domain addons-704432 (source=lease)
	I1205 06:05:23.549342   17561 main.go:143] libmachine: trying to list again with source=arp
	I1205 06:05:23.549664   17561 main.go:143] libmachine: unable to find current IP address of domain addons-704432 in network mk-addons-704432 (interfaces detected: [])
	I1205 06:05:23.549722   17561 retry.go:31] will retry after 2.75720621s: waiting for domain to come up
	I1205 06:05:26.308206   17561 main.go:143] libmachine: domain addons-704432 has defined MAC address 52:54:00:03:26:6a in network mk-addons-704432
	I1205 06:05:26.308723   17561 main.go:143] libmachine: domain addons-704432 has current primary IP address 192.168.39.31 and MAC address 52:54:00:03:26:6a in network mk-addons-704432
	I1205 06:05:26.308736   17561 main.go:143] libmachine: found domain IP: 192.168.39.31
	I1205 06:05:26.308742   17561 main.go:143] libmachine: reserving static IP address...
	I1205 06:05:26.309129   17561 main.go:143] libmachine: unable to find host DHCP lease matching {name: "addons-704432", mac: "52:54:00:03:26:6a", ip: "192.168.39.31"} in network mk-addons-704432
	I1205 06:05:26.542948   17561 main.go:143] libmachine: reserved static IP address 192.168.39.31 for domain addons-704432
	I1205 06:05:26.542984   17561 main.go:143] libmachine: waiting for SSH...
	I1205 06:05:26.542993   17561 main.go:143] libmachine: Getting to WaitForSSH function...
	I1205 06:05:26.546172   17561 main.go:143] libmachine: domain addons-704432 has defined MAC address 52:54:00:03:26:6a in network mk-addons-704432
	I1205 06:05:26.546602   17561 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:03:26:6a", ip: ""} in network mk-addons-704432: {Iface:virbr1 ExpiryTime:2025-12-05 07:05:25 +0000 UTC Type:0 Mac:52:54:00:03:26:6a Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:minikube Clientid:01:52:54:00:03:26:6a}
	I1205 06:05:26.546639   17561 main.go:143] libmachine: domain addons-704432 has defined IP address 192.168.39.31 and MAC address 52:54:00:03:26:6a in network mk-addons-704432
	I1205 06:05:26.546892   17561 main.go:143] libmachine: Using SSH client type: native
	I1205 06:05:26.547168   17561 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.31 22 <nil> <nil>}
	I1205 06:05:26.547182   17561 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1205 06:05:26.662577   17561 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1205 06:05:26.662929   17561 main.go:143] libmachine: domain creation complete
	I1205 06:05:26.664462   17561 machine.go:94] provisionDockerMachine start ...
	I1205 06:05:26.666821   17561 main.go:143] libmachine: domain addons-704432 has defined MAC address 52:54:00:03:26:6a in network mk-addons-704432
	I1205 06:05:26.667226   17561 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:03:26:6a", ip: ""} in network mk-addons-704432: {Iface:virbr1 ExpiryTime:2025-12-05 07:05:25 +0000 UTC Type:0 Mac:52:54:00:03:26:6a Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:minikube Clientid:01:52:54:00:03:26:6a}
	I1205 06:05:26.667252   17561 main.go:143] libmachine: domain addons-704432 has defined IP address 192.168.39.31 and MAC address 52:54:00:03:26:6a in network mk-addons-704432
	I1205 06:05:26.667400   17561 main.go:143] libmachine: Using SSH client type: native
	I1205 06:05:26.667593   17561 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.31 22 <nil> <nil>}
	I1205 06:05:26.667603   17561 main.go:143] libmachine: About to run SSH command:
	hostname
	I1205 06:05:26.785301   17561 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1205 06:05:26.785328   17561 buildroot.go:166] provisioning hostname "addons-704432"
	I1205 06:05:26.788082   17561 main.go:143] libmachine: domain addons-704432 has defined MAC address 52:54:00:03:26:6a in network mk-addons-704432
	I1205 06:05:26.788510   17561 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:03:26:6a", ip: ""} in network mk-addons-704432: {Iface:virbr1 ExpiryTime:2025-12-05 07:05:25 +0000 UTC Type:0 Mac:52:54:00:03:26:6a Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:addons-704432 Clientid:01:52:54:00:03:26:6a}
	I1205 06:05:26.788532   17561 main.go:143] libmachine: domain addons-704432 has defined IP address 192.168.39.31 and MAC address 52:54:00:03:26:6a in network mk-addons-704432
	I1205 06:05:26.788731   17561 main.go:143] libmachine: Using SSH client type: native
	I1205 06:05:26.788973   17561 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.31 22 <nil> <nil>}
	I1205 06:05:26.788986   17561 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-704432 && echo "addons-704432" | sudo tee /etc/hostname
	I1205 06:05:26.931126   17561 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-704432
	
	I1205 06:05:26.933907   17561 main.go:143] libmachine: domain addons-704432 has defined MAC address 52:54:00:03:26:6a in network mk-addons-704432
	I1205 06:05:26.934311   17561 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:03:26:6a", ip: ""} in network mk-addons-704432: {Iface:virbr1 ExpiryTime:2025-12-05 07:05:25 +0000 UTC Type:0 Mac:52:54:00:03:26:6a Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:addons-704432 Clientid:01:52:54:00:03:26:6a}
	I1205 06:05:26.934340   17561 main.go:143] libmachine: domain addons-704432 has defined IP address 192.168.39.31 and MAC address 52:54:00:03:26:6a in network mk-addons-704432
	I1205 06:05:26.934527   17561 main.go:143] libmachine: Using SSH client type: native
	I1205 06:05:26.934720   17561 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.31 22 <nil> <nil>}
	I1205 06:05:26.934734   17561 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-704432' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-704432/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-704432' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 06:05:27.061855   17561 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1205 06:05:27.061887   17561 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21997-12744/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-12744/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-12744/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-12744/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-12744/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-12744/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-12744/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-12744/.minikube}
	I1205 06:05:27.061946   17561 buildroot.go:174] setting up certificates
	I1205 06:05:27.061961   17561 provision.go:84] configureAuth start
	I1205 06:05:27.064938   17561 main.go:143] libmachine: domain addons-704432 has defined MAC address 52:54:00:03:26:6a in network mk-addons-704432
	I1205 06:05:27.065357   17561 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:03:26:6a", ip: ""} in network mk-addons-704432: {Iface:virbr1 ExpiryTime:2025-12-05 07:05:25 +0000 UTC Type:0 Mac:52:54:00:03:26:6a Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:addons-704432 Clientid:01:52:54:00:03:26:6a}
	I1205 06:05:27.065378   17561 main.go:143] libmachine: domain addons-704432 has defined IP address 192.168.39.31 and MAC address 52:54:00:03:26:6a in network mk-addons-704432
	I1205 06:05:27.068085   17561 main.go:143] libmachine: domain addons-704432 has defined MAC address 52:54:00:03:26:6a in network mk-addons-704432
	I1205 06:05:27.068450   17561 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:03:26:6a", ip: ""} in network mk-addons-704432: {Iface:virbr1 ExpiryTime:2025-12-05 07:05:25 +0000 UTC Type:0 Mac:52:54:00:03:26:6a Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:addons-704432 Clientid:01:52:54:00:03:26:6a}
	I1205 06:05:27.068468   17561 main.go:143] libmachine: domain addons-704432 has defined IP address 192.168.39.31 and MAC address 52:54:00:03:26:6a in network mk-addons-704432
	I1205 06:05:27.068587   17561 provision.go:143] copyHostCerts
	I1205 06:05:27.068655   17561 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-12744/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-12744/.minikube/ca.pem (1078 bytes)
	I1205 06:05:27.068824   17561 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-12744/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-12744/.minikube/cert.pem (1123 bytes)
	I1205 06:05:27.068933   17561 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-12744/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-12744/.minikube/key.pem (1675 bytes)
	I1205 06:05:27.069051   17561 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-12744/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-12744/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-12744/.minikube/certs/ca-key.pem org=jenkins.addons-704432 san=[127.0.0.1 192.168.39.31 addons-704432 localhost minikube]
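Note: the server certificate generated above is minted with the SANs listed in the log (127.0.0.1, 192.168.39.31, addons-704432, localhost, minikube). A quick way to confirm that on the host is to decode the PEM with openssl; this is only a sketch, assuming openssl is installed and the path from the log is unchanged:

    # Inspect the SANs of the freshly generated server certificate (path taken from the log above)
    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/21997-12744/.minikube/machines/server.pem \
      | grep -A1 'Subject Alternative Name'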
	I1205 06:05:27.152977   17561 provision.go:177] copyRemoteCerts
	I1205 06:05:27.153034   17561 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 06:05:27.155518   17561 main.go:143] libmachine: domain addons-704432 has defined MAC address 52:54:00:03:26:6a in network mk-addons-704432
	I1205 06:05:27.155920   17561 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:03:26:6a", ip: ""} in network mk-addons-704432: {Iface:virbr1 ExpiryTime:2025-12-05 07:05:25 +0000 UTC Type:0 Mac:52:54:00:03:26:6a Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:addons-704432 Clientid:01:52:54:00:03:26:6a}
	I1205 06:05:27.155941   17561 main.go:143] libmachine: domain addons-704432 has defined IP address 192.168.39.31 and MAC address 52:54:00:03:26:6a in network mk-addons-704432
	I1205 06:05:27.156102   17561 sshutil.go:53] new ssh client: &{IP:192.168.39.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-12744/.minikube/machines/addons-704432/id_rsa Username:docker}
	I1205 06:05:27.245448   17561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12744/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1205 06:05:27.277441   17561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12744/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1205 06:05:27.307542   17561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12744/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1205 06:05:27.335298   17561 provision.go:87] duration metric: took 273.32643ms to configureAuth
	I1205 06:05:27.335321   17561 buildroot.go:189] setting minikube options for container-runtime
	I1205 06:05:27.335517   17561 config.go:182] Loaded profile config "addons-704432": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1205 06:05:27.338217   17561 main.go:143] libmachine: domain addons-704432 has defined MAC address 52:54:00:03:26:6a in network mk-addons-704432
	I1205 06:05:27.338593   17561 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:03:26:6a", ip: ""} in network mk-addons-704432: {Iface:virbr1 ExpiryTime:2025-12-05 07:05:25 +0000 UTC Type:0 Mac:52:54:00:03:26:6a Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:addons-704432 Clientid:01:52:54:00:03:26:6a}
	I1205 06:05:27.338616   17561 main.go:143] libmachine: domain addons-704432 has defined IP address 192.168.39.31 and MAC address 52:54:00:03:26:6a in network mk-addons-704432
	I1205 06:05:27.338794   17561 main.go:143] libmachine: Using SSH client type: native
	I1205 06:05:27.339045   17561 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.31 22 <nil> <nil>}
	I1205 06:05:27.339061   17561 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 06:05:27.874479   17561 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
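Note: the SSH command above writes a CRIO_MINIKUBE_OPTIONS override into /etc/sysconfig/crio.minikube so CRI-O treats the service CIDR 10.96.0.0/12 as an insecure registry, then restarts the daemon. A hedged way to double-check the result from the host, reusing the profile name from this run:

    # Verify the override landed and CRI-O came back up (profile name taken from this run)
    out/minikube-linux-amd64 -p addons-704432 ssh "cat /etc/sysconfig/crio.minikube"
    out/minikube-linux-amd64 -p addons-704432 ssh "sudo systemctl is-active crio"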
	I1205 06:05:27.874511   17561 machine.go:97] duration metric: took 1.210031139s to provisionDockerMachine
	I1205 06:05:27.874524   17561 client.go:176] duration metric: took 17.367791657s to LocalClient.Create
	I1205 06:05:27.874535   17561 start.go:167] duration metric: took 17.367834624s to libmachine.API.Create "addons-704432"
	I1205 06:05:27.874541   17561 start.go:293] postStartSetup for "addons-704432" (driver="kvm2")
	I1205 06:05:27.874552   17561 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 06:05:27.874617   17561 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 06:05:27.876995   17561 main.go:143] libmachine: domain addons-704432 has defined MAC address 52:54:00:03:26:6a in network mk-addons-704432
	I1205 06:05:27.877362   17561 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:03:26:6a", ip: ""} in network mk-addons-704432: {Iface:virbr1 ExpiryTime:2025-12-05 07:05:25 +0000 UTC Type:0 Mac:52:54:00:03:26:6a Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:addons-704432 Clientid:01:52:54:00:03:26:6a}
	I1205 06:05:27.877381   17561 main.go:143] libmachine: domain addons-704432 has defined IP address 192.168.39.31 and MAC address 52:54:00:03:26:6a in network mk-addons-704432
	I1205 06:05:27.877515   17561 sshutil.go:53] new ssh client: &{IP:192.168.39.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-12744/.minikube/machines/addons-704432/id_rsa Username:docker}
	I1205 06:05:27.965456   17561 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 06:05:27.970049   17561 info.go:137] Remote host: Buildroot 2025.02
	I1205 06:05:27.970077   17561 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-12744/.minikube/addons for local assets ...
	I1205 06:05:27.970142   17561 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-12744/.minikube/files for local assets ...
	I1205 06:05:27.970161   17561 start.go:296] duration metric: took 95.614204ms for postStartSetup
	I1205 06:05:28.022592   17561 main.go:143] libmachine: domain addons-704432 has defined MAC address 52:54:00:03:26:6a in network mk-addons-704432
	I1205 06:05:28.022940   17561 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:03:26:6a", ip: ""} in network mk-addons-704432: {Iface:virbr1 ExpiryTime:2025-12-05 07:05:25 +0000 UTC Type:0 Mac:52:54:00:03:26:6a Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:addons-704432 Clientid:01:52:54:00:03:26:6a}
	I1205 06:05:28.022973   17561 main.go:143] libmachine: domain addons-704432 has defined IP address 192.168.39.31 and MAC address 52:54:00:03:26:6a in network mk-addons-704432
	I1205 06:05:28.023178   17561 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/addons-704432/config.json ...
	I1205 06:05:28.083776   17561 start.go:128] duration metric: took 17.579403441s to createHost
	I1205 06:05:28.086190   17561 main.go:143] libmachine: domain addons-704432 has defined MAC address 52:54:00:03:26:6a in network mk-addons-704432
	I1205 06:05:28.086549   17561 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:03:26:6a", ip: ""} in network mk-addons-704432: {Iface:virbr1 ExpiryTime:2025-12-05 07:05:25 +0000 UTC Type:0 Mac:52:54:00:03:26:6a Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:addons-704432 Clientid:01:52:54:00:03:26:6a}
	I1205 06:05:28.086577   17561 main.go:143] libmachine: domain addons-704432 has defined IP address 192.168.39.31 and MAC address 52:54:00:03:26:6a in network mk-addons-704432
	I1205 06:05:28.086739   17561 main.go:143] libmachine: Using SSH client type: native
	I1205 06:05:28.086946   17561 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.31 22 <nil> <nil>}
	I1205 06:05:28.086959   17561 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1205 06:05:28.201794   17561 main.go:143] libmachine: SSH cmd err, output: <nil>: 1764914728.162781239
	
	I1205 06:05:28.201815   17561 fix.go:216] guest clock: 1764914728.162781239
	I1205 06:05:28.201822   17561 fix.go:229] Guest: 2025-12-05 06:05:28.162781239 +0000 UTC Remote: 2025-12-05 06:05:28.083802218 +0000 UTC m=+17.673849490 (delta=78.979021ms)
	I1205 06:05:28.201837   17561 fix.go:200] guest clock delta is within tolerance: 78.979021ms
	I1205 06:05:28.201842   17561 start.go:83] releasing machines lock for "addons-704432", held for 17.69754528s
	I1205 06:05:28.204548   17561 main.go:143] libmachine: domain addons-704432 has defined MAC address 52:54:00:03:26:6a in network mk-addons-704432
	I1205 06:05:28.204968   17561 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:03:26:6a", ip: ""} in network mk-addons-704432: {Iface:virbr1 ExpiryTime:2025-12-05 07:05:25 +0000 UTC Type:0 Mac:52:54:00:03:26:6a Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:addons-704432 Clientid:01:52:54:00:03:26:6a}
	I1205 06:05:28.204992   17561 main.go:143] libmachine: domain addons-704432 has defined IP address 192.168.39.31 and MAC address 52:54:00:03:26:6a in network mk-addons-704432
	I1205 06:05:28.205513   17561 ssh_runner.go:195] Run: cat /version.json
	I1205 06:05:28.205602   17561 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 06:05:28.208429   17561 main.go:143] libmachine: domain addons-704432 has defined MAC address 52:54:00:03:26:6a in network mk-addons-704432
	I1205 06:05:28.208740   17561 main.go:143] libmachine: domain addons-704432 has defined MAC address 52:54:00:03:26:6a in network mk-addons-704432
	I1205 06:05:28.208895   17561 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:03:26:6a", ip: ""} in network mk-addons-704432: {Iface:virbr1 ExpiryTime:2025-12-05 07:05:25 +0000 UTC Type:0 Mac:52:54:00:03:26:6a Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:addons-704432 Clientid:01:52:54:00:03:26:6a}
	I1205 06:05:28.208925   17561 main.go:143] libmachine: domain addons-704432 has defined IP address 192.168.39.31 and MAC address 52:54:00:03:26:6a in network mk-addons-704432
	I1205 06:05:28.209108   17561 sshutil.go:53] new ssh client: &{IP:192.168.39.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-12744/.minikube/machines/addons-704432/id_rsa Username:docker}
	I1205 06:05:28.209312   17561 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:03:26:6a", ip: ""} in network mk-addons-704432: {Iface:virbr1 ExpiryTime:2025-12-05 07:05:25 +0000 UTC Type:0 Mac:52:54:00:03:26:6a Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:addons-704432 Clientid:01:52:54:00:03:26:6a}
	I1205 06:05:28.209333   17561 main.go:143] libmachine: domain addons-704432 has defined IP address 192.168.39.31 and MAC address 52:54:00:03:26:6a in network mk-addons-704432
	I1205 06:05:28.209501   17561 sshutil.go:53] new ssh client: &{IP:192.168.39.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-12744/.minikube/machines/addons-704432/id_rsa Username:docker}
	I1205 06:05:28.291384   17561 ssh_runner.go:195] Run: systemctl --version
	I1205 06:05:28.318227   17561 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 06:05:28.951933   17561 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 06:05:28.958457   17561 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 06:05:28.958524   17561 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 06:05:28.977532   17561 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1205 06:05:28.977560   17561 start.go:496] detecting cgroup driver to use...
	I1205 06:05:28.977637   17561 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 06:05:28.996396   17561 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 06:05:29.012264   17561 docker.go:218] disabling cri-docker service (if available) ...
	I1205 06:05:29.012324   17561 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 06:05:29.028725   17561 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 06:05:29.044238   17561 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 06:05:29.180937   17561 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 06:05:29.385960   17561 docker.go:234] disabling docker service ...
	I1205 06:05:29.386018   17561 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 06:05:29.405967   17561 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 06:05:29.420011   17561 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 06:05:29.570018   17561 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 06:05:29.707870   17561 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 06:05:29.723771   17561 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 06:05:29.745558   17561 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1205 06:05:29.745611   17561 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 06:05:29.757218   17561 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1205 06:05:29.757275   17561 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 06:05:29.768772   17561 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 06:05:29.780515   17561 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 06:05:29.791963   17561 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 06:05:29.803904   17561 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 06:05:29.815467   17561 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 06:05:29.835254   17561 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 06:05:29.846648   17561 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 06:05:29.856218   17561 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1205 06:05:29.856267   17561 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1205 06:05:29.874817   17561 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 06:05:29.885423   17561 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 06:05:30.027035   17561 ssh_runner.go:195] Run: sudo systemctl restart crio
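Note: the sed sequence above edits /etc/crio/crio.conf.d/02-crio.conf in place: it pins the pause image, switches the cgroup manager to cgroupfs, moves conmon into the pod cgroup, and opens unprivileged ports via default_sysctls, before br_netfilter is loaded, IP forwarding is enabled, and CRI-O is restarted. A hedged spot-check of the resulting drop-in, run inside the guest:

    # Show only the keys the sed commands above touched, then the kernel-side settings
    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|default_sysctls|ip_unprivileged_port_start' \
        /etc/crio/crio.conf.d/02-crio.conf
    lsmod | grep br_netfilter && sysctl net.ipv4.ip_forward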
	I1205 06:05:30.130443   17561 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 06:05:30.130531   17561 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 06:05:30.135717   17561 start.go:564] Will wait 60s for crictl version
	I1205 06:05:30.135776   17561 ssh_runner.go:195] Run: which crictl
	I1205 06:05:30.139512   17561 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1205 06:05:30.175076   17561 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1205 06:05:30.175179   17561 ssh_runner.go:195] Run: crio --version
	I1205 06:05:30.206092   17561 ssh_runner.go:195] Run: crio --version
	I1205 06:05:30.235954   17561 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.29.1 ...
	I1205 06:05:30.239724   17561 main.go:143] libmachine: domain addons-704432 has defined MAC address 52:54:00:03:26:6a in network mk-addons-704432
	I1205 06:05:30.240210   17561 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:03:26:6a", ip: ""} in network mk-addons-704432: {Iface:virbr1 ExpiryTime:2025-12-05 07:05:25 +0000 UTC Type:0 Mac:52:54:00:03:26:6a Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:addons-704432 Clientid:01:52:54:00:03:26:6a}
	I1205 06:05:30.240234   17561 main.go:143] libmachine: domain addons-704432 has defined IP address 192.168.39.31 and MAC address 52:54:00:03:26:6a in network mk-addons-704432
	I1205 06:05:30.240455   17561 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1205 06:05:30.244589   17561 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 06:05:30.258882   17561 kubeadm.go:884] updating cluster {Name:addons-704432 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-704432 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.31 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 06:05:30.259002   17561 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1205 06:05:30.259054   17561 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 06:05:30.286706   17561 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.2". assuming images are not preloaded.
	I1205 06:05:30.286794   17561 ssh_runner.go:195] Run: which lz4
	I1205 06:05:30.291021   17561 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1205 06:05:30.295447   17561 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1205 06:05:30.295473   17561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12744/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (340306595 bytes)
	I1205 06:05:31.409501   17561 crio.go:462] duration metric: took 1.118502684s to copy over tarball
	I1205 06:05:31.409584   17561 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1205 06:05:32.838832   17561 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.429212379s)
	I1205 06:05:32.838885   17561 crio.go:469] duration metric: took 1.429355472s to extract the tarball
	I1205 06:05:32.838893   17561 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1205 06:05:32.874284   17561 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 06:05:32.912129   17561 crio.go:514] all images are preloaded for cri-o runtime.
	I1205 06:05:32.912150   17561 cache_images.go:86] Images are preloaded, skipping loading
	I1205 06:05:32.912157   17561 kubeadm.go:935] updating node { 192.168.39.31 8443 v1.34.2 crio true true} ...
	I1205 06:05:32.912249   17561 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-704432 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.31
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:addons-704432 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
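Note: the [Service] override above is what later lands in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (the 312-byte scp a few lines below); the empty ExecStart= line clears the stock command before the minikube-specific one is set. A hedged way to see the merged unit on the guest:

    # Show the kubelet unit together with the 10-kubeadm.conf drop-in written below
    systemctl cat kubelet
    systemctl is-enabled kubelet || true   # kubeadm warns further down that the service is not enabled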
	I1205 06:05:32.912320   17561 ssh_runner.go:195] Run: crio config
	I1205 06:05:32.958766   17561 cni.go:84] Creating CNI manager for ""
	I1205 06:05:32.958792   17561 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 06:05:32.958809   17561 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1205 06:05:32.958861   17561 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.31 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-704432 NodeName:addons-704432 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.31"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.31 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 06:05:32.958970   17561 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.31
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-704432"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.31"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.31"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
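Note: the rendered kubeadm config above is written to /var/tmp/minikube/kubeadm.yaml.new below and copied to kubeadm.yaml before init runs. If you want to sanity-check such a file without touching the cluster, one hedged option (assuming the bundled kubeadm supports the config validate subcommand, which recent releases do):

    # Validate the generated kubeadm config in place (sketch; makes no cluster changes)
    sudo /var/lib/minikube/binaries/v1.34.2/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml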
	I1205 06:05:32.959033   17561 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1205 06:05:32.970654   17561 binaries.go:51] Found k8s binaries, skipping transfer
	I1205 06:05:32.970786   17561 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 06:05:32.982185   17561 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1205 06:05:33.001546   17561 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1205 06:05:33.021077   17561 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
	I1205 06:05:33.040483   17561 ssh_runner.go:195] Run: grep 192.168.39.31	control-plane.minikube.internal$ /etc/hosts
	I1205 06:05:33.044411   17561 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.31	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 06:05:33.057882   17561 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 06:05:33.199413   17561 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 06:05:33.235659   17561 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/addons-704432 for IP: 192.168.39.31
	I1205 06:05:33.235706   17561 certs.go:195] generating shared ca certs ...
	I1205 06:05:33.235731   17561 certs.go:227] acquiring lock for ca certs: {Name:mk31e04487a5cf4ece02d9725a994239b98a3eba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 06:05:33.235925   17561 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-12744/.minikube/ca.key
	I1205 06:05:33.321717   17561 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-12744/.minikube/ca.crt ...
	I1205 06:05:33.321745   17561 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-12744/.minikube/ca.crt: {Name:mkc3010a463b035f1849eecfc11c34f4243a94ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 06:05:33.321907   17561 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-12744/.minikube/ca.key ...
	I1205 06:05:33.321920   17561 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-12744/.minikube/ca.key: {Name:mka2993c6df2cbffa1a9332ea82f28eb0e8af044 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 06:05:33.321994   17561 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-12744/.minikube/proxy-client-ca.key
	I1205 06:05:33.420602   17561 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-12744/.minikube/proxy-client-ca.crt ...
	I1205 06:05:33.420629   17561 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-12744/.minikube/proxy-client-ca.crt: {Name:mk2063dc6bc2608c35a5ed586f2e32b8bbdb30eb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 06:05:33.420815   17561 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-12744/.minikube/proxy-client-ca.key ...
	I1205 06:05:33.420830   17561 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-12744/.minikube/proxy-client-ca.key: {Name:mkd5def8bec2fe7125b5f2d8a3e7c6df11625faa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 06:05:33.420903   17561 certs.go:257] generating profile certs ...
	I1205 06:05:33.420953   17561 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/addons-704432/client.key
	I1205 06:05:33.420966   17561 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/addons-704432/client.crt with IP's: []
	I1205 06:05:33.443445   17561 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/addons-704432/client.crt ...
	I1205 06:05:33.443468   17561 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/addons-704432/client.crt: {Name:mkb4b9fbbba8595fb239134543b4c909edff29d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 06:05:33.443604   17561 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/addons-704432/client.key ...
	I1205 06:05:33.443614   17561 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/addons-704432/client.key: {Name:mk25e242125b7aac0fefef7c15f1ac4a33535771 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 06:05:33.443695   17561 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/addons-704432/apiserver.key.0ee6b6fb
	I1205 06:05:33.443713   17561 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/addons-704432/apiserver.crt.0ee6b6fb with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.31]
	I1205 06:05:33.497141   17561 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/addons-704432/apiserver.crt.0ee6b6fb ...
	I1205 06:05:33.497171   17561 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/addons-704432/apiserver.crt.0ee6b6fb: {Name:mkafae054c00e448ebf1c1a77c409b917cf8a2db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 06:05:33.497331   17561 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/addons-704432/apiserver.key.0ee6b6fb ...
	I1205 06:05:33.497343   17561 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/addons-704432/apiserver.key.0ee6b6fb: {Name:mk4ddacb86cce97cc082ab0699872e34fb80f4ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 06:05:33.497427   17561 certs.go:382] copying /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/addons-704432/apiserver.crt.0ee6b6fb -> /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/addons-704432/apiserver.crt
	I1205 06:05:33.497497   17561 certs.go:386] copying /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/addons-704432/apiserver.key.0ee6b6fb -> /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/addons-704432/apiserver.key
	I1205 06:05:33.497544   17561 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/addons-704432/proxy-client.key
	I1205 06:05:33.497563   17561 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/addons-704432/proxy-client.crt with IP's: []
	I1205 06:05:33.522196   17561 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/addons-704432/proxy-client.crt ...
	I1205 06:05:33.522224   17561 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/addons-704432/proxy-client.crt: {Name:mkdd8acaf413fc3ce51432cd61fd562c29e654ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 06:05:33.522390   17561 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/addons-704432/proxy-client.key ...
	I1205 06:05:33.522402   17561 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/addons-704432/proxy-client.key: {Name:mk3391cf4349e1399dd9b7cb438beb0f3a92b8d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 06:05:33.522577   17561 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-12744/.minikube/certs/ca-key.pem (1679 bytes)
	I1205 06:05:33.522616   17561 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-12744/.minikube/certs/ca.pem (1078 bytes)
	I1205 06:05:33.522641   17561 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-12744/.minikube/certs/cert.pem (1123 bytes)
	I1205 06:05:33.522667   17561 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-12744/.minikube/certs/key.pem (1675 bytes)
	I1205 06:05:33.523202   17561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12744/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 06:05:33.553199   17561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12744/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1205 06:05:33.580438   17561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12744/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 06:05:33.608115   17561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12744/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1205 06:05:33.635296   17561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/addons-704432/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1205 06:05:33.663180   17561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/addons-704432/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1205 06:05:33.691639   17561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/addons-704432/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 06:05:33.720168   17561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/addons-704432/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1205 06:05:33.747978   17561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12744/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 06:05:33.776328   17561 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 06:05:33.795708   17561 ssh_runner.go:195] Run: openssl version
	I1205 06:05:33.801975   17561 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1205 06:05:33.813058   17561 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1205 06:05:33.824677   17561 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 06:05:33.829700   17561 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 06:05 /usr/share/ca-certificates/minikubeCA.pem
	I1205 06:05:33.829766   17561 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 06:05:33.837070   17561 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1205 06:05:33.847947   17561 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
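Note: the /etc/ssl/certs/b5213941.0 symlink name is simply the OpenSSL subject hash of minikubeCA.pem with a .0 suffix, which is why the openssl x509 -hash -noout call directly precedes the ln -fs. The same mapping, spelled out as a sketch:

    # The hash printed here is the symlink name used above (b5213941 for this CA)
    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    ls -l /etc/ssl/certs/b5213941.0   # -> /etc/ssl/certs/minikubeCA.pem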
	I1205 06:05:33.858568   17561 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 06:05:33.863189   17561 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1205 06:05:33.863247   17561 kubeadm.go:401] StartCluster: {Name:addons-704432 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-704432 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.31 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 06:05:33.863355   17561 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 06:05:33.863401   17561 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 06:05:33.895467   17561 cri.go:89] found id: ""
	I1205 06:05:33.895555   17561 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 06:05:33.909549   17561 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 06:05:33.921146   17561 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 06:05:33.935107   17561 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 06:05:33.935136   17561 kubeadm.go:158] found existing configuration files:
	
	I1205 06:05:33.935198   17561 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 06:05:33.948848   17561 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 06:05:33.948909   17561 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 06:05:33.964303   17561 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 06:05:33.975713   17561 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 06:05:33.975783   17561 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 06:05:33.986700   17561 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 06:05:33.996769   17561 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 06:05:33.996820   17561 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 06:05:34.007458   17561 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 06:05:34.017591   17561 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 06:05:34.017641   17561 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1205 06:05:34.028002   17561 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1205 06:05:34.163496   17561 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1205 06:05:46.069972   17561 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1205 06:05:46.070047   17561 kubeadm.go:319] [preflight] Running pre-flight checks
	I1205 06:05:46.070140   17561 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 06:05:46.070264   17561 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 06:05:46.070393   17561 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1205 06:05:46.070478   17561 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 06:05:46.071629   17561 out.go:252]   - Generating certificates and keys ...
	I1205 06:05:46.071710   17561 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1205 06:05:46.071769   17561 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1205 06:05:46.071830   17561 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1205 06:05:46.071879   17561 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1205 06:05:46.071959   17561 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1205 06:05:46.072010   17561 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1205 06:05:46.072070   17561 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1205 06:05:46.072192   17561 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-704432 localhost] and IPs [192.168.39.31 127.0.0.1 ::1]
	I1205 06:05:46.072268   17561 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1205 06:05:46.072379   17561 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-704432 localhost] and IPs [192.168.39.31 127.0.0.1 ::1]
	I1205 06:05:46.072443   17561 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1205 06:05:46.072497   17561 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1205 06:05:46.072536   17561 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1205 06:05:46.072599   17561 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 06:05:46.072664   17561 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 06:05:46.072759   17561 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1205 06:05:46.072828   17561 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 06:05:46.072881   17561 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 06:05:46.072949   17561 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 06:05:46.073039   17561 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 06:05:46.073094   17561 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 06:05:46.074493   17561 out.go:252]   - Booting up control plane ...
	I1205 06:05:46.074563   17561 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 06:05:46.074628   17561 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 06:05:46.074703   17561 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 06:05:46.074801   17561 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 06:05:46.074894   17561 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1205 06:05:46.075008   17561 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1205 06:05:46.075083   17561 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 06:05:46.075115   17561 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1205 06:05:46.075227   17561 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1205 06:05:46.075316   17561 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1205 06:05:46.075369   17561 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 504.996408ms
	I1205 06:05:46.075443   17561 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1205 06:05:46.075510   17561 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.39.31:8443/livez
	I1205 06:05:46.075580   17561 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1205 06:05:46.075653   17561 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1205 06:05:46.075731   17561 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.310921071s
	I1205 06:05:46.075792   17561 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 4.56345308s
	I1205 06:05:46.075860   17561 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.501654093s
	I1205 06:05:46.075954   17561 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1205 06:05:46.076056   17561 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1205 06:05:46.076103   17561 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1205 06:05:46.076263   17561 kubeadm.go:319] [mark-control-plane] Marking the node addons-704432 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1205 06:05:46.076315   17561 kubeadm.go:319] [bootstrap-token] Using token: v8wyqw.9bxq7c0hovc6oc9r
	I1205 06:05:46.077559   17561 out.go:252]   - Configuring RBAC rules ...
	I1205 06:05:46.077702   17561 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1205 06:05:46.077790   17561 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1205 06:05:46.077922   17561 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1205 06:05:46.078036   17561 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1205 06:05:46.078135   17561 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1205 06:05:46.078203   17561 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1205 06:05:46.078293   17561 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1205 06:05:46.078328   17561 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1205 06:05:46.078381   17561 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1205 06:05:46.078386   17561 kubeadm.go:319] 
	I1205 06:05:46.078432   17561 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1205 06:05:46.078438   17561 kubeadm.go:319] 
	I1205 06:05:46.078498   17561 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1205 06:05:46.078504   17561 kubeadm.go:319] 
	I1205 06:05:46.078528   17561 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1205 06:05:46.078575   17561 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1205 06:05:46.078617   17561 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1205 06:05:46.078622   17561 kubeadm.go:319] 
	I1205 06:05:46.078708   17561 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1205 06:05:46.078722   17561 kubeadm.go:319] 
	I1205 06:05:46.078793   17561 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1205 06:05:46.078802   17561 kubeadm.go:319] 
	I1205 06:05:46.078876   17561 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1205 06:05:46.078945   17561 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1205 06:05:46.079001   17561 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1205 06:05:46.079006   17561 kubeadm.go:319] 
	I1205 06:05:46.079089   17561 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1205 06:05:46.079171   17561 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1205 06:05:46.079177   17561 kubeadm.go:319] 
	I1205 06:05:46.079251   17561 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token v8wyqw.9bxq7c0hovc6oc9r \
	I1205 06:05:46.079339   17561 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2d0ac5ac5e7ca057303e4430ec89e40d74de77786c64de55c276a16d7451ec23 \
	I1205 06:05:46.079363   17561 kubeadm.go:319] 	--control-plane 
	I1205 06:05:46.079371   17561 kubeadm.go:319] 
	I1205 06:05:46.079436   17561 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1205 06:05:46.079442   17561 kubeadm.go:319] 
	I1205 06:05:46.079508   17561 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token v8wyqw.9bxq7c0hovc6oc9r \
	I1205 06:05:46.079635   17561 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2d0ac5ac5e7ca057303e4430ec89e40d74de77786c64de55c276a16d7451ec23 
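Note: the --discovery-token-ca-cert-hash printed by kubeadm is the SHA-256 of the cluster CA's public key, and it can be recomputed from the CA that was copied to /var/lib/minikube/certs earlier in this log. This follows the standard openssl pipeline from the kubeadm documentation (sketch, run on the control-plane node):

    # Recompute the discovery hash; the output should match the sha256:2d0ac5ac... value above
    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'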
	I1205 06:05:46.079648   17561 cni.go:84] Creating CNI manager for ""
	I1205 06:05:46.079657   17561 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 06:05:46.081699   17561 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1205 06:05:46.082958   17561 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1205 06:05:46.096440   17561 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
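Note: at this point /etc/cni/net.d should contain only the bridge config minikube just wrote, plus the podman bridge conflist that was renamed to .mk_disabled earlier in the log. A hedged listing to confirm that layout inside the guest:

    # Expect 1-k8s.conflist (written above) and 87-podman-bridge.conflist.mk_disabled (disabled earlier)
    sudo ls -l /etc/cni/net.d/
    sudo cat /etc/cni/net.d/1-k8s.conflist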
	I1205 06:05:46.118604   17561 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1205 06:05:46.118763   17561 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 06:05:46.118775   17561 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-704432 minikube.k8s.io/updated_at=2025_12_05T06_05_46_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=d53406164b08000276c1d84507c3250851dada45 minikube.k8s.io/name=addons-704432 minikube.k8s.io/primary=true
	I1205 06:05:46.161489   17561 ops.go:34] apiserver oom_adj: -16
	I1205 06:05:46.262470   17561 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 06:05:46.762896   17561 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 06:05:47.263205   17561 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 06:05:47.762952   17561 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 06:05:48.262775   17561 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 06:05:48.763375   17561 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 06:05:49.262594   17561 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 06:05:49.763536   17561 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 06:05:50.263214   17561 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 06:05:50.763473   17561 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 06:05:50.859463   17561 kubeadm.go:1114] duration metric: took 4.740758664s to wait for elevateKubeSystemPrivileges
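
The burst of `kubectl get sa default` calls above is the elevateKubeSystemPrivileges wait: the default ServiceAccount showing up is a cheap signal that the service account controller has finished bootstrapping, after which the cluster-admin binding created earlier can be relied on. A rough Go equivalent of that poll loop; the kubeconfig path and timeout are placeholders, not the values minikube used:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // Poll `kubectl get sa default` until it succeeds or a deadline passes,
    // mirroring the retry loop visible in the log above.
    func main() {
        deadline := time.Now().Add(2 * time.Minute)
        for time.Now().Before(deadline) {
            cmd := exec.Command("kubectl", "--kubeconfig", "/path/to/kubeconfig",
                "get", "sa", "default")
            if err := cmd.Run(); err == nil {
                fmt.Println("default service account is present")
                return
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("timed out waiting for the default service account")
    }
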
	I1205 06:05:50.859510   17561 kubeadm.go:403] duration metric: took 16.996266282s to StartCluster
	I1205 06:05:50.859531   17561 settings.go:142] acquiring lock: {Name:mk2f276bdecf61f8264687dd612372cc78cfacbb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 06:05:50.859664   17561 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21997-12744/kubeconfig
	I1205 06:05:50.860087   17561 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-12744/kubeconfig: {Name:mka919c4eb7b6e761ae422db15b3daf8c8fde4d7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 06:05:50.860328   17561 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1205 06:05:50.860365   17561 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.39.31 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 06:05:50.860420   17561 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1205 06:05:50.860528   17561 addons.go:70] Setting yakd=true in profile "addons-704432"
	I1205 06:05:50.860547   17561 addons.go:70] Setting inspektor-gadget=true in profile "addons-704432"
	I1205 06:05:50.860556   17561 config.go:182] Loaded profile config "addons-704432": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1205 06:05:50.860566   17561 addons.go:70] Setting metrics-server=true in profile "addons-704432"
	I1205 06:05:50.860580   17561 addons.go:239] Setting addon inspektor-gadget=true in "addons-704432"
	I1205 06:05:50.860586   17561 addons.go:239] Setting addon metrics-server=true in "addons-704432"
	I1205 06:05:50.860559   17561 addons.go:239] Setting addon yakd=true in "addons-704432"
	I1205 06:05:50.860616   17561 host.go:66] Checking if "addons-704432" exists ...
	I1205 06:05:50.860623   17561 addons.go:70] Setting gcp-auth=true in profile "addons-704432"
	I1205 06:05:50.860623   17561 host.go:66] Checking if "addons-704432" exists ...
	I1205 06:05:50.860623   17561 addons.go:70] Setting default-storageclass=true in profile "addons-704432"
	I1205 06:05:50.860647   17561 mustload.go:66] Loading cluster: addons-704432
	I1205 06:05:50.860649   17561 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-704432"
	I1205 06:05:50.860659   17561 addons.go:70] Setting cloud-spanner=true in profile "addons-704432"
	I1205 06:05:50.860677   17561 addons.go:70] Setting storage-provisioner=true in profile "addons-704432"
	I1205 06:05:50.860708   17561 addons.go:239] Setting addon cloud-spanner=true in "addons-704432"
	I1205 06:05:50.860709   17561 addons.go:239] Setting addon storage-provisioner=true in "addons-704432"
	I1205 06:05:50.860737   17561 host.go:66] Checking if "addons-704432" exists ...
	I1205 06:05:50.860839   17561 config.go:182] Loaded profile config "addons-704432": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1205 06:05:50.861015   17561 host.go:66] Checking if "addons-704432" exists ...
	I1205 06:05:50.861029   17561 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-704432"
	I1205 06:05:50.861064   17561 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-704432"
	I1205 06:05:50.861085   17561 host.go:66] Checking if "addons-704432" exists ...
	I1205 06:05:50.861355   17561 addons.go:70] Setting ingress=true in profile "addons-704432"
	I1205 06:05:50.861378   17561 addons.go:239] Setting addon ingress=true in "addons-704432"
	I1205 06:05:50.861406   17561 host.go:66] Checking if "addons-704432" exists ...
	I1205 06:05:50.861640   17561 addons.go:70] Setting ingress-dns=true in profile "addons-704432"
	I1205 06:05:50.861662   17561 addons.go:239] Setting addon ingress-dns=true in "addons-704432"
	I1205 06:05:50.861709   17561 host.go:66] Checking if "addons-704432" exists ...
	I1205 06:05:50.860616   17561 host.go:66] Checking if "addons-704432" exists ...
	I1205 06:05:50.862014   17561 addons.go:70] Setting volcano=true in profile "addons-704432"
	I1205 06:05:50.862025   17561 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-704432"
	I1205 06:05:50.862037   17561 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-704432"
	I1205 06:05:50.862045   17561 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-704432"
	I1205 06:05:50.862050   17561 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-704432"
	I1205 06:05:50.862067   17561 host.go:66] Checking if "addons-704432" exists ...
	I1205 06:05:50.862188   17561 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-704432"
	I1205 06:05:50.862207   17561 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-704432"
	I1205 06:05:50.862227   17561 host.go:66] Checking if "addons-704432" exists ...
	I1205 06:05:50.862421   17561 addons.go:70] Setting registry=true in profile "addons-704432"
	I1205 06:05:50.862437   17561 addons.go:239] Setting addon registry=true in "addons-704432"
	I1205 06:05:50.862462   17561 host.go:66] Checking if "addons-704432" exists ...
	I1205 06:05:50.862731   17561 addons.go:70] Setting registry-creds=true in profile "addons-704432"
	I1205 06:05:50.862761   17561 addons.go:239] Setting addon registry-creds=true in "addons-704432"
	I1205 06:05:50.862609   17561 addons.go:70] Setting volumesnapshots=true in profile "addons-704432"
	I1205 06:05:50.862786   17561 host.go:66] Checking if "addons-704432" exists ...
	I1205 06:05:50.862807   17561 addons.go:239] Setting addon volumesnapshots=true in "addons-704432"
	I1205 06:05:50.862836   17561 host.go:66] Checking if "addons-704432" exists ...
	I1205 06:05:50.862032   17561 addons.go:239] Setting addon volcano=true in "addons-704432"
	I1205 06:05:50.863181   17561 host.go:66] Checking if "addons-704432" exists ...
	I1205 06:05:50.863717   17561 out.go:179] * Verifying Kubernetes components...
	I1205 06:05:50.865398   17561 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 06:05:50.868244   17561 host.go:66] Checking if "addons-704432" exists ...
	I1205 06:05:50.868751   17561 addons.go:239] Setting addon default-storageclass=true in "addons-704432"
	I1205 06:05:50.868766   17561 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1205 06:05:50.868781   17561 host.go:66] Checking if "addons-704432" exists ...
	I1205 06:05:50.868898   17561 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1205 06:05:50.869567   17561 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 06:05:50.870313   17561 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.45
	I1205 06:05:50.870352   17561 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1205 06:05:50.870638   17561 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1205 06:05:50.870875   17561 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1205 06:05:50.870882   17561 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-704432"
	I1205 06:05:50.870933   17561 host.go:66] Checking if "addons-704432" exists ...
	I1205 06:05:50.870883   17561 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1205 06:05:50.871293   17561 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1205 06:05:50.871597   17561 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1205 06:05:50.871607   17561 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1205 06:05:50.871635   17561 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1205 06:05:50.872246   17561 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1205 06:05:50.872451   17561 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1205 06:05:50.872451   17561 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1205 06:05:50.872469   17561 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.46.0
	I1205 06:05:50.873587   17561 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1205 06:05:50.873599   17561 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	W1205 06:05:50.872598   17561 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1205 06:05:50.872568   17561 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:05:50.874021   17561 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1205 06:05:50.874023   17561 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1205 06:05:50.874034   17561 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1205 06:05:50.874027   17561 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1205 06:05:50.874065   17561 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1205 06:05:50.874851   17561 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1205 06:05:50.875062   17561 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1205 06:05:50.875101   17561 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1205 06:05:50.875565   17561 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1205 06:05:50.875143   17561 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1205 06:05:50.875650   17561 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1205 06:05:50.876134   17561 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1205 06:05:50.876158   17561 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1205 06:05:50.876137   17561 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1205 06:05:50.876236   17561 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1205 06:05:50.876249   17561 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1205 06:05:50.876279   17561 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1205 06:05:50.876292   17561 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1205 06:05:50.876927   17561 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1205 06:05:50.876943   17561 out.go:179]   - Using image docker.io/registry:3.0.0
	I1205 06:05:50.877758   17561 main.go:143] libmachine: domain addons-704432 has defined MAC address 52:54:00:03:26:6a in network mk-addons-704432
	I1205 06:05:50.878406   17561 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.0
	I1205 06:05:50.878424   17561 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1205 06:05:50.878427   17561 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1205 06:05:50.878633   17561 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1205 06:05:50.879034   17561 main.go:143] libmachine: domain addons-704432 has defined MAC address 52:54:00:03:26:6a in network mk-addons-704432
	I1205 06:05:50.879441   17561 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:03:26:6a", ip: ""} in network mk-addons-704432: {Iface:virbr1 ExpiryTime:2025-12-05 07:05:25 +0000 UTC Type:0 Mac:52:54:00:03:26:6a Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:addons-704432 Clientid:01:52:54:00:03:26:6a}
	I1205 06:05:50.879470   17561 main.go:143] libmachine: domain addons-704432 has defined IP address 192.168.39.31 and MAC address 52:54:00:03:26:6a in network mk-addons-704432
	I1205 06:05:50.879827   17561 out.go:179]   - Using image docker.io/busybox:stable
	I1205 06:05:50.879918   17561 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1205 06:05:50.879992   17561 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1205 06:05:50.880551   17561 main.go:143] libmachine: domain addons-704432 has defined MAC address 52:54:00:03:26:6a in network mk-addons-704432
	I1205 06:05:50.880663   17561 sshutil.go:53] new ssh client: &{IP:192.168.39.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-12744/.minikube/machines/addons-704432/id_rsa Username:docker}
	I1205 06:05:50.880832   17561 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:03:26:6a", ip: ""} in network mk-addons-704432: {Iface:virbr1 ExpiryTime:2025-12-05 07:05:25 +0000 UTC Type:0 Mac:52:54:00:03:26:6a Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:addons-704432 Clientid:01:52:54:00:03:26:6a}
	I1205 06:05:50.880868   17561 main.go:143] libmachine: domain addons-704432 has defined IP address 192.168.39.31 and MAC address 52:54:00:03:26:6a in network mk-addons-704432
	I1205 06:05:50.881598   17561 sshutil.go:53] new ssh client: &{IP:192.168.39.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-12744/.minikube/machines/addons-704432/id_rsa Username:docker}
	I1205 06:05:50.881736   17561 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1205 06:05:50.881789   17561 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1205 06:05:50.881807   17561 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1205 06:05:50.883056   17561 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:03:26:6a", ip: ""} in network mk-addons-704432: {Iface:virbr1 ExpiryTime:2025-12-05 07:05:25 +0000 UTC Type:0 Mac:52:54:00:03:26:6a Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:addons-704432 Clientid:01:52:54:00:03:26:6a}
	I1205 06:05:50.883097   17561 main.go:143] libmachine: domain addons-704432 has defined IP address 192.168.39.31 and MAC address 52:54:00:03:26:6a in network mk-addons-704432
	I1205 06:05:50.883830   17561 sshutil.go:53] new ssh client: &{IP:192.168.39.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-12744/.minikube/machines/addons-704432/id_rsa Username:docker}
	I1205 06:05:50.883927   17561 main.go:143] libmachine: domain addons-704432 has defined MAC address 52:54:00:03:26:6a in network mk-addons-704432
	I1205 06:05:50.883939   17561 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1205 06:05:50.885101   17561 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1205 06:05:50.885323   17561 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:03:26:6a", ip: ""} in network mk-addons-704432: {Iface:virbr1 ExpiryTime:2025-12-05 07:05:25 +0000 UTC Type:0 Mac:52:54:00:03:26:6a Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:addons-704432 Clientid:01:52:54:00:03:26:6a}
	I1205 06:05:50.885354   17561 main.go:143] libmachine: domain addons-704432 has defined IP address 192.168.39.31 and MAC address 52:54:00:03:26:6a in network mk-addons-704432
	I1205 06:05:50.886112   17561 main.go:143] libmachine: domain addons-704432 has defined MAC address 52:54:00:03:26:6a in network mk-addons-704432
	I1205 06:05:50.886210   17561 main.go:143] libmachine: domain addons-704432 has defined MAC address 52:54:00:03:26:6a in network mk-addons-704432
	I1205 06:05:50.886236   17561 sshutil.go:53] new ssh client: &{IP:192.168.39.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-12744/.minikube/machines/addons-704432/id_rsa Username:docker}
	I1205 06:05:50.886989   17561 main.go:143] libmachine: domain addons-704432 has defined MAC address 52:54:00:03:26:6a in network mk-addons-704432
	I1205 06:05:50.887152   17561 main.go:143] libmachine: domain addons-704432 has defined MAC address 52:54:00:03:26:6a in network mk-addons-704432
	I1205 06:05:50.887620   17561 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:03:26:6a", ip: ""} in network mk-addons-704432: {Iface:virbr1 ExpiryTime:2025-12-05 07:05:25 +0000 UTC Type:0 Mac:52:54:00:03:26:6a Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:addons-704432 Clientid:01:52:54:00:03:26:6a}
	I1205 06:05:50.887651   17561 main.go:143] libmachine: domain addons-704432 has defined IP address 192.168.39.31 and MAC address 52:54:00:03:26:6a in network mk-addons-704432
	I1205 06:05:50.887769   17561 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:03:26:6a", ip: ""} in network mk-addons-704432: {Iface:virbr1 ExpiryTime:2025-12-05 07:05:25 +0000 UTC Type:0 Mac:52:54:00:03:26:6a Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:addons-704432 Clientid:01:52:54:00:03:26:6a}
	I1205 06:05:50.887804   17561 main.go:143] libmachine: domain addons-704432 has defined IP address 192.168.39.31 and MAC address 52:54:00:03:26:6a in network mk-addons-704432
	I1205 06:05:50.887916   17561 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1205 06:05:50.888110   17561 main.go:143] libmachine: domain addons-704432 has defined MAC address 52:54:00:03:26:6a in network mk-addons-704432
	I1205 06:05:50.888167   17561 sshutil.go:53] new ssh client: &{IP:192.168.39.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-12744/.minikube/machines/addons-704432/id_rsa Username:docker}
	I1205 06:05:50.888478   17561 main.go:143] libmachine: domain addons-704432 has defined MAC address 52:54:00:03:26:6a in network mk-addons-704432
	I1205 06:05:50.888513   17561 sshutil.go:53] new ssh client: &{IP:192.168.39.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-12744/.minikube/machines/addons-704432/id_rsa Username:docker}
	I1205 06:05:50.888563   17561 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:03:26:6a", ip: ""} in network mk-addons-704432: {Iface:virbr1 ExpiryTime:2025-12-05 07:05:25 +0000 UTC Type:0 Mac:52:54:00:03:26:6a Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:addons-704432 Clientid:01:52:54:00:03:26:6a}
	I1205 06:05:50.888590   17561 main.go:143] libmachine: domain addons-704432 has defined IP address 192.168.39.31 and MAC address 52:54:00:03:26:6a in network mk-addons-704432
	I1205 06:05:50.888816   17561 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:03:26:6a", ip: ""} in network mk-addons-704432: {Iface:virbr1 ExpiryTime:2025-12-05 07:05:25 +0000 UTC Type:0 Mac:52:54:00:03:26:6a Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:addons-704432 Clientid:01:52:54:00:03:26:6a}
	I1205 06:05:50.888847   17561 main.go:143] libmachine: domain addons-704432 has defined IP address 192.168.39.31 and MAC address 52:54:00:03:26:6a in network mk-addons-704432
	I1205 06:05:50.888892   17561 main.go:143] libmachine: domain addons-704432 has defined MAC address 52:54:00:03:26:6a in network mk-addons-704432
	I1205 06:05:50.889202   17561 sshutil.go:53] new ssh client: &{IP:192.168.39.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-12744/.minikube/machines/addons-704432/id_rsa Username:docker}
	I1205 06:05:50.889293   17561 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:03:26:6a", ip: ""} in network mk-addons-704432: {Iface:virbr1 ExpiryTime:2025-12-05 07:05:25 +0000 UTC Type:0 Mac:52:54:00:03:26:6a Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:addons-704432 Clientid:01:52:54:00:03:26:6a}
	I1205 06:05:50.889327   17561 main.go:143] libmachine: domain addons-704432 has defined IP address 192.168.39.31 and MAC address 52:54:00:03:26:6a in network mk-addons-704432
	I1205 06:05:50.889472   17561 sshutil.go:53] new ssh client: &{IP:192.168.39.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-12744/.minikube/machines/addons-704432/id_rsa Username:docker}
	I1205 06:05:50.889872   17561 main.go:143] libmachine: domain addons-704432 has defined MAC address 52:54:00:03:26:6a in network mk-addons-704432
	I1205 06:05:50.889878   17561 sshutil.go:53] new ssh client: &{IP:192.168.39.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-12744/.minikube/machines/addons-704432/id_rsa Username:docker}
	I1205 06:05:50.889872   17561 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:03:26:6a", ip: ""} in network mk-addons-704432: {Iface:virbr1 ExpiryTime:2025-12-05 07:05:25 +0000 UTC Type:0 Mac:52:54:00:03:26:6a Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:addons-704432 Clientid:01:52:54:00:03:26:6a}
	I1205 06:05:50.889959   17561 main.go:143] libmachine: domain addons-704432 has defined IP address 192.168.39.31 and MAC address 52:54:00:03:26:6a in network mk-addons-704432
	I1205 06:05:50.890005   17561 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:03:26:6a", ip: ""} in network mk-addons-704432: {Iface:virbr1 ExpiryTime:2025-12-05 07:05:25 +0000 UTC Type:0 Mac:52:54:00:03:26:6a Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:addons-704432 Clientid:01:52:54:00:03:26:6a}
	I1205 06:05:50.890029   17561 main.go:143] libmachine: domain addons-704432 has defined IP address 192.168.39.31 and MAC address 52:54:00:03:26:6a in network mk-addons-704432
	I1205 06:05:50.890291   17561 sshutil.go:53] new ssh client: &{IP:192.168.39.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-12744/.minikube/machines/addons-704432/id_rsa Username:docker}
	I1205 06:05:50.890348   17561 sshutil.go:53] new ssh client: &{IP:192.168.39.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-12744/.minikube/machines/addons-704432/id_rsa Username:docker}
	I1205 06:05:50.890667   17561 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1205 06:05:50.890838   17561 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:03:26:6a", ip: ""} in network mk-addons-704432: {Iface:virbr1 ExpiryTime:2025-12-05 07:05:25 +0000 UTC Type:0 Mac:52:54:00:03:26:6a Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:addons-704432 Clientid:01:52:54:00:03:26:6a}
	I1205 06:05:50.890876   17561 main.go:143] libmachine: domain addons-704432 has defined IP address 192.168.39.31 and MAC address 52:54:00:03:26:6a in network mk-addons-704432
	I1205 06:05:50.890998   17561 main.go:143] libmachine: domain addons-704432 has defined MAC address 52:54:00:03:26:6a in network mk-addons-704432
	I1205 06:05:50.891035   17561 sshutil.go:53] new ssh client: &{IP:192.168.39.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-12744/.minikube/machines/addons-704432/id_rsa Username:docker}
	I1205 06:05:50.891577   17561 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:03:26:6a", ip: ""} in network mk-addons-704432: {Iface:virbr1 ExpiryTime:2025-12-05 07:05:25 +0000 UTC Type:0 Mac:52:54:00:03:26:6a Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:addons-704432 Clientid:01:52:54:00:03:26:6a}
	I1205 06:05:50.891607   17561 main.go:143] libmachine: domain addons-704432 has defined MAC address 52:54:00:03:26:6a in network mk-addons-704432
	I1205 06:05:50.891619   17561 main.go:143] libmachine: domain addons-704432 has defined IP address 192.168.39.31 and MAC address 52:54:00:03:26:6a in network mk-addons-704432
	I1205 06:05:50.891869   17561 sshutil.go:53] new ssh client: &{IP:192.168.39.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-12744/.minikube/machines/addons-704432/id_rsa Username:docker}
	I1205 06:05:50.892061   17561 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1205 06:05:50.892076   17561 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1205 06:05:50.892226   17561 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:03:26:6a", ip: ""} in network mk-addons-704432: {Iface:virbr1 ExpiryTime:2025-12-05 07:05:25 +0000 UTC Type:0 Mac:52:54:00:03:26:6a Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:addons-704432 Clientid:01:52:54:00:03:26:6a}
	I1205 06:05:50.892259   17561 main.go:143] libmachine: domain addons-704432 has defined IP address 192.168.39.31 and MAC address 52:54:00:03:26:6a in network mk-addons-704432
	I1205 06:05:50.892455   17561 sshutil.go:53] new ssh client: &{IP:192.168.39.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-12744/.minikube/machines/addons-704432/id_rsa Username:docker}
	I1205 06:05:50.894935   17561 main.go:143] libmachine: domain addons-704432 has defined MAC address 52:54:00:03:26:6a in network mk-addons-704432
	I1205 06:05:50.895366   17561 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:03:26:6a", ip: ""} in network mk-addons-704432: {Iface:virbr1 ExpiryTime:2025-12-05 07:05:25 +0000 UTC Type:0 Mac:52:54:00:03:26:6a Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:addons-704432 Clientid:01:52:54:00:03:26:6a}
	I1205 06:05:50.895389   17561 main.go:143] libmachine: domain addons-704432 has defined IP address 192.168.39.31 and MAC address 52:54:00:03:26:6a in network mk-addons-704432
	I1205 06:05:50.895543   17561 sshutil.go:53] new ssh client: &{IP:192.168.39.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-12744/.minikube/machines/addons-704432/id_rsa Username:docker}
	W1205 06:05:51.122784   17561 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:44432->192.168.39.31:22: read: connection reset by peer
	I1205 06:05:51.122813   17561 retry.go:31] will retry after 321.601327ms: ssh: handshake failed: read tcp 192.168.39.1:44432->192.168.39.31:22: read: connection reset by peer
	W1205 06:05:51.151003   17561 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:44448->192.168.39.31:22: read: connection reset by peer
	I1205 06:05:51.151047   17561 retry.go:31] will retry after 328.931068ms: ssh: handshake failed: read tcp 192.168.39.1:44448->192.168.39.31:22: read: connection reset by peer
	W1205 06:05:51.151130   17561 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:44456->192.168.39.31:22: read: connection reset by peer
	I1205 06:05:51.151147   17561 retry.go:31] will retry after 273.037535ms: ssh: handshake failed: read tcp 192.168.39.1:44456->192.168.39.31:22: read: connection reset by peer
	I1205 06:05:51.400990   17561 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 06:05:51.401526   17561 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1205 06:05:51.608168   17561 node_ready.go:35] waiting up to 6m0s for node "addons-704432" to be "Ready" ...
	I1205 06:05:51.613874   17561 node_ready.go:49] node "addons-704432" is "Ready"
	I1205 06:05:51.613910   17561 node_ready.go:38] duration metric: took 5.707892ms for node "addons-704432" to be "Ready" ...
	I1205 06:05:51.613922   17561 api_server.go:52] waiting for apiserver process to appear ...
	I1205 06:05:51.613962   17561 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:05:52.020532   17561 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1205 06:05:52.022388   17561 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1205 06:05:52.022411   17561 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1205 06:05:52.025301   17561 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:05:52.034067   17561 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1205 06:05:52.034090   17561 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1205 06:05:52.054079   17561 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1205 06:05:52.054120   17561 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1205 06:05:52.111764   17561 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1205 06:05:52.152635   17561 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1205 06:05:52.187513   17561 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1205 06:05:52.241552   17561 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1205 06:05:52.241854   17561 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:05:52.248815   17561 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1205 06:05:52.248839   17561 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1205 06:05:52.322199   17561 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1205 06:05:52.324431   17561 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1205 06:05:52.324456   17561 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1205 06:05:52.559853   17561 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1205 06:05:52.559885   17561 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1205 06:05:52.560830   17561 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1205 06:05:52.582808   17561 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1205 06:05:52.582828   17561 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1205 06:05:52.585730   17561 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1205 06:05:52.706405   17561 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1205 06:05:52.706436   17561 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1205 06:05:52.770164   17561 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1205 06:05:52.770199   17561 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1205 06:05:52.775882   17561 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1205 06:05:52.775903   17561 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1205 06:05:52.831667   17561 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1205 06:05:52.831714   17561 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1205 06:05:52.851265   17561 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1205 06:05:52.998178   17561 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1205 06:05:52.998213   17561 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1205 06:05:53.021357   17561 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1205 06:05:53.021388   17561 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1205 06:05:53.069355   17561 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1205 06:05:53.069385   17561 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1205 06:05:53.101146   17561 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1205 06:05:53.101179   17561 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1205 06:05:53.300831   17561 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1205 06:05:53.300861   17561 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1205 06:05:53.342167   17561 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1205 06:05:53.342198   17561 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1205 06:05:53.360662   17561 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1205 06:05:53.360715   17561 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1205 06:05:53.389269   17561 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1205 06:05:53.587121   17561 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1205 06:05:53.631715   17561 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1205 06:05:53.738784   17561 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1205 06:05:53.738813   17561 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1205 06:05:54.137110   17561 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1205 06:05:54.137131   17561 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1205 06:05:54.212702   17561 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.811130609s)
	I1205 06:05:54.212736   17561 start.go:977] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
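
The sed pipeline that just completed rewrites the coredns ConfigMap so that host.minikube.internal resolves to the host-side address of the VM network (192.168.39.1 here). Reconstructed from that sed expression, the injected stanza looks like the following, placed immediately before the `forward . /etc/resolv.conf` line, with a `log` directive also inserted before `errors`:

        hosts {
           192.168.39.1 host.minikube.internal
           fallthrough
        }
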
	I1205 06:05:54.212768   17561 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.598786915s)
	I1205 06:05:54.212788   17561 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (2.192228091s)
	I1205 06:05:54.212806   17561 api_server.go:72] duration metric: took 3.352404516s to wait for apiserver process to appear ...
	I1205 06:05:54.212816   17561 api_server.go:88] waiting for apiserver healthz status ...
	I1205 06:05:54.212825   17561 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.187498607s)
	I1205 06:05:54.212845   17561 api_server.go:253] Checking apiserver healthz at https://192.168.39.31:8443/healthz ...
	I1205 06:05:54.233314   17561 api_server.go:279] https://192.168.39.31:8443/healthz returned 200:
	ok
	I1205 06:05:54.242758   17561 api_server.go:141] control plane version: v1.34.2
	I1205 06:05:54.242783   17561 api_server.go:131] duration metric: took 29.960759ms to wait for apiserver health ...
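
The healthz wait above is just an HTTPS GET against the apiserver, accepted once it answers 200 with body "ok". A minimal probe of the same endpoint in Go; the URL is taken from the log, and skipping TLS verification is an illustration shortcut, a real client should trust the cluster CA instead:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // GET the apiserver /healthz endpoint seen in the log and print the result.
    // InsecureSkipVerify is only for the sketch; verify against the cluster CA
    // in real code.
    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get("https://192.168.39.31:8443/healthz")
        if err != nil {
            fmt.Println("healthz request failed:", err)
            return
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
    }
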
	I1205 06:05:54.242791   17561 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 06:05:54.293078   17561 system_pods.go:59] 10 kube-system pods found
	I1205 06:05:54.293134   17561 system_pods.go:61] "amd-gpu-device-plugin-h9dbz" [e3ad30c3-9f62-45e0-b778-e81884060dcb] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1205 06:05:54.293140   17561 system_pods.go:61] "coredns-66bc5c9577-92z7v" [6a23a376-d669-4854-be9a-0a3835a097bb] Running
	I1205 06:05:54.293147   17561 system_pods.go:61] "coredns-66bc5c9577-vnvp7" [56f0252c-fe5f-408e-9399-ecc391efa51d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 06:05:54.293160   17561 system_pods.go:61] "etcd-addons-704432" [e648b4b2-447b-4dec-bdc5-8ac5ef3627d0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1205 06:05:54.293168   17561 system_pods.go:61] "kube-apiserver-addons-704432" [9fc9c87b-6a21-40ac-b042-9e96ddeeded3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1205 06:05:54.293177   17561 system_pods.go:61] "kube-controller-manager-addons-704432" [f3cc65e9-8239-4979-a2e8-d5d50b8b1944] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1205 06:05:54.293183   17561 system_pods.go:61] "kube-proxy-fjwcj" [aefe502d-6848-492f-b3f3-63135d161647] Running
	I1205 06:05:54.293200   17561 system_pods.go:61] "kube-scheduler-addons-704432" [1c216dfe-c4f2-417a-affc-0dca950dc90b] Running
	I1205 06:05:54.293205   17561 system_pods.go:61] "nvidia-device-plugin-daemonset-7bpgl" [e13f38d1-2164-4e32-9c93-9921cb031513] Pending
	I1205 06:05:54.293216   17561 system_pods.go:61] "registry-creds-764b6fb674-4wpj4" [6ad9f9a8-315e-4b5f-a50e-7d8b75ecc66b] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1205 06:05:54.293226   17561 system_pods.go:74] duration metric: took 50.429496ms to wait for pod list to return data ...
	I1205 06:05:54.293236   17561 default_sa.go:34] waiting for default service account to be created ...
	I1205 06:05:54.316704   17561 default_sa.go:45] found service account: "default"
	I1205 06:05:54.316729   17561 default_sa.go:55] duration metric: took 23.48743ms for default service account to be created ...
	I1205 06:05:54.316738   17561 system_pods.go:116] waiting for k8s-apps to be running ...
	I1205 06:05:54.345516   17561 system_pods.go:86] 10 kube-system pods found
	I1205 06:05:54.345544   17561 system_pods.go:89] "amd-gpu-device-plugin-h9dbz" [e3ad30c3-9f62-45e0-b778-e81884060dcb] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1205 06:05:54.345549   17561 system_pods.go:89] "coredns-66bc5c9577-92z7v" [6a23a376-d669-4854-be9a-0a3835a097bb] Running
	I1205 06:05:54.345556   17561 system_pods.go:89] "coredns-66bc5c9577-vnvp7" [56f0252c-fe5f-408e-9399-ecc391efa51d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 06:05:54.345561   17561 system_pods.go:89] "etcd-addons-704432" [e648b4b2-447b-4dec-bdc5-8ac5ef3627d0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1205 06:05:54.345576   17561 system_pods.go:89] "kube-apiserver-addons-704432" [9fc9c87b-6a21-40ac-b042-9e96ddeeded3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1205 06:05:54.345582   17561 system_pods.go:89] "kube-controller-manager-addons-704432" [f3cc65e9-8239-4979-a2e8-d5d50b8b1944] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1205 06:05:54.345588   17561 system_pods.go:89] "kube-proxy-fjwcj" [aefe502d-6848-492f-b3f3-63135d161647] Running
	I1205 06:05:54.345593   17561 system_pods.go:89] "kube-scheduler-addons-704432" [1c216dfe-c4f2-417a-affc-0dca950dc90b] Running
	I1205 06:05:54.345600   17561 system_pods.go:89] "nvidia-device-plugin-daemonset-7bpgl" [e13f38d1-2164-4e32-9c93-9921cb031513] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1205 06:05:54.345608   17561 system_pods.go:89] "registry-creds-764b6fb674-4wpj4" [6ad9f9a8-315e-4b5f-a50e-7d8b75ecc66b] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1205 06:05:54.345618   17561 system_pods.go:126] duration metric: took 28.874211ms to wait for k8s-apps to be running ...
	I1205 06:05:54.345630   17561 system_svc.go:44] waiting for kubelet service to be running ....
	I1205 06:05:54.345676   17561 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 06:05:54.590033   17561 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1205 06:05:54.590064   17561 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1205 06:05:54.721135   17561 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-704432" context rescaled to 1 replicas
	I1205 06:05:54.986244   17561 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1205 06:05:54.986263   17561 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1205 06:05:55.121555   17561 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (2.968883512s)
	I1205 06:05:55.121732   17561 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.009939482s)
	I1205 06:05:55.332665   17561 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1205 06:05:55.332699   17561 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1205 06:05:55.727058   17561 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1205 06:05:55.727091   17561 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1205 06:05:56.168376   17561 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1205 06:05:56.909744   17561 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.722190512s)
	I1205 06:05:58.122782   17561 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (5.881182135s)
	I1205 06:05:58.122874   17561 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.880986992s)
	I1205 06:05:58.122927   17561 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.80068876s)
	I1205 06:05:58.122970   17561 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (5.562115988s)
	I1205 06:05:58.278760   17561 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1205 06:05:58.281546   17561 main.go:143] libmachine: domain addons-704432 has defined MAC address 52:54:00:03:26:6a in network mk-addons-704432
	I1205 06:05:58.281973   17561 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:03:26:6a", ip: ""} in network mk-addons-704432: {Iface:virbr1 ExpiryTime:2025-12-05 07:05:25 +0000 UTC Type:0 Mac:52:54:00:03:26:6a Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:addons-704432 Clientid:01:52:54:00:03:26:6a}
	I1205 06:05:58.281996   17561 main.go:143] libmachine: domain addons-704432 has defined IP address 192.168.39.31 and MAC address 52:54:00:03:26:6a in network mk-addons-704432
	I1205 06:05:58.282166   17561 sshutil.go:53] new ssh client: &{IP:192.168.39.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-12744/.minikube/machines/addons-704432/id_rsa Username:docker}
	I1205 06:05:58.610436   17561 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1205 06:05:58.722837   17561 addons.go:239] Setting addon gcp-auth=true in "addons-704432"
	I1205 06:05:58.722912   17561 host.go:66] Checking if "addons-704432" exists ...
	I1205 06:05:58.724880   17561 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1205 06:05:58.727591   17561 main.go:143] libmachine: domain addons-704432 has defined MAC address 52:54:00:03:26:6a in network mk-addons-704432
	I1205 06:05:58.728152   17561 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:03:26:6a", ip: ""} in network mk-addons-704432: {Iface:virbr1 ExpiryTime:2025-12-05 07:05:25 +0000 UTC Type:0 Mac:52:54:00:03:26:6a Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:addons-704432 Clientid:01:52:54:00:03:26:6a}
	I1205 06:05:58.728193   17561 main.go:143] libmachine: domain addons-704432 has defined IP address 192.168.39.31 and MAC address 52:54:00:03:26:6a in network mk-addons-704432
	I1205 06:05:58.728364   17561 sshutil.go:53] new ssh client: &{IP:192.168.39.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-12744/.minikube/machines/addons-704432/id_rsa Username:docker}
	I1205 06:06:00.017726   17561 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.166416175s)
	I1205 06:06:00.017779   17561 addons.go:495] Verifying addon registry=true in "addons-704432"
	I1205 06:06:00.017831   17561 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.628524361s)
	I1205 06:06:00.017911   17561 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.430755465s)
	I1205 06:06:00.017866   17561 addons.go:495] Verifying addon metrics-server=true in "addons-704432"
	I1205 06:06:00.018031   17561 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.386267608s)
	W1205 06:06:00.018072   17561 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1205 06:06:00.018101   17561 retry.go:31] will retry after 313.446854ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
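The failure above is a CRD establishment race: the VolumeSnapshotClass object is applied in the same batch as the CustomResourceDefinitions that define its kind, so the API server cannot map the kind yet and the whole apply exits non-zero; minikube simply backs off and re-applies. As a rough sketch of that pattern (assumed for illustration, not minikube's actual retry.go/addons.go code; the kubectl path, backoff values, and helper names are made up):

```go
// Minimal sketch: retry `kubectl apply` with exponential backoff when the
// only problem is that freshly-created CRDs are not yet discoverable.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// applyWithRetry re-runs the apply while the output contains the
// "no matches for kind" error that signals an unestablished CRD.
func applyWithRetry(kubectl string, files []string, attempts int, backoff time.Duration) error {
	args := append([]string{"apply"}, interleave("-f", files)...)
	var lastErr error
	for i := 0; i < attempts; i++ {
		out, err := exec.Command(kubectl, args...).CombinedOutput()
		if err == nil {
			return nil
		}
		lastErr = fmt.Errorf("apply failed: %v\n%s", err, out)
		// Any other error is permanent and not worth retrying.
		if !strings.Contains(string(out), "no matches for kind") {
			return lastErr
		}
		time.Sleep(backoff)
		backoff *= 2
	}
	return lastErr
}

func interleave(flag string, vals []string) []string {
	var out []string
	for _, v := range vals {
		out = append(out, flag, v)
	}
	return out
}

func main() {
	files := []string{
		"/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml",
		// ... remaining snapshot CRD and controller manifests
	}
	if err := applyWithRetry("kubectl", files, 5, 300*time.Millisecond); err != nil {
		fmt.Println(err)
	}
}
```

In the log the retry fires after roughly 300ms and, when it still races, the later attempt at 06:06:00.332 adds --force, which succeeds once the CRDs are established.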
	I1205 06:06:00.018072   17561 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (5.672358843s)
	I1205 06:06:00.018142   17561 system_svc.go:56] duration metric: took 5.672508858s WaitForService to wait for kubelet
	I1205 06:06:00.018154   17561 kubeadm.go:587] duration metric: took 9.157754104s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 06:06:00.018178   17561 node_conditions.go:102] verifying NodePressure condition ...
	I1205 06:06:00.019379   17561 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.433625371s)
	I1205 06:06:00.019396   17561 addons.go:495] Verifying addon ingress=true in "addons-704432"
	I1205 06:06:00.019728   17561 out.go:179] * Verifying registry addon...
	I1205 06:06:00.019734   17561 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-704432 service yakd-dashboard -n yakd-dashboard
	
	I1205 06:06:00.020765   17561 out.go:179] * Verifying ingress addon...
	I1205 06:06:00.021366   17561 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1205 06:06:00.022548   17561 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1205 06:06:00.047495   17561 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1205 06:06:00.047521   17561 node_conditions.go:123] node cpu capacity is 2
	I1205 06:06:00.047539   17561 node_conditions.go:105] duration metric: took 29.35399ms to run NodePressure ...
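The NodePressure check logged here reads the node's reported ephemeral-storage and CPU capacity and confirms no pressure condition is set. A hedged client-go sketch of an equivalent check (illustrative only, not minikube's node_conditions.go; kubeconfig loading and output format are assumptions):

```go
// Sketch: list nodes, print capacity, and flag any pressure-style condition.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("node %s: ephemeral storage %s, cpu %s\n", n.Name, storage.String(), cpu.String())

		for _, c := range n.Status.Conditions {
			// MemoryPressure, DiskPressure and PIDPressure should be False on a
			// healthy node; only Ready is expected to be True.
			if c.Type != corev1.NodeReady && c.Status == corev1.ConditionTrue {
				fmt.Printf("node %s reports condition %s=True\n", n.Name, c.Type)
			}
		}
	}
}
```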
	I1205 06:06:00.047553   17561 start.go:242] waiting for startup goroutines ...
	I1205 06:06:00.057096   17561 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1205 06:06:00.057114   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:00.057346   17561 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1205 06:06:00.057370   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
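The long run of kapi.go:96 lines that follows is a poll loop: roughly every 500ms the test lists pods matching each addon's label selector and waits until every match reports Ready. A small client-go sketch of that wait (assumed for illustration, not minikube's kapi.go; the poll interval, timeout, and function name are made up):

```go
// Sketch: block until all pods matching a label selector are Ready.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func podReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
			return true
		}
	}
	return false
}

func waitForLabel(cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
		pods, err := cs.CoreV1().Pods(ns).List(context.TODO(), metav1.ListOptions{LabelSelector: selector})
		if err != nil || len(pods.Items) == 0 {
			return false, nil // keep polling on transient errors or before pods exist
		}
		for i := range pods.Items {
			if !podReady(&pods.Items[i]) {
				return false, nil
			}
		}
		return true, nil
	})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitForLabel(cs, "ingress-nginx", "app.kubernetes.io/name=ingress-nginx", 6*time.Minute); err != nil {
		fmt.Println("timed out waiting for ingress-nginx pods:", err)
	}
}
```

Each "current state: Pending" line below is one iteration of such a loop; the registry selector completes first (06:06:31), while the ingress, gcp-auth, and csi-hostpath-driver selectors keep polling.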
	I1205 06:06:00.332127   17561 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1205 06:06:00.542763   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:00.543048   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:01.047455   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:01.047491   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:01.209968   17561 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.041540022s)
	I1205 06:06:01.210017   17561 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-704432"
	I1205 06:06:01.210030   17561 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.485119297s)
	I1205 06:06:01.211487   17561 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1205 06:06:01.211504   17561 out.go:179] * Verifying csi-hostpath-driver addon...
	I1205 06:06:01.212993   17561 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1205 06:06:01.213863   17561 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1205 06:06:01.213953   17561 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1205 06:06:01.213970   17561 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1205 06:06:01.224556   17561 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1205 06:06:01.224578   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:01.354606   17561 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1205 06:06:01.354636   17561 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1205 06:06:01.477491   17561 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1205 06:06:01.477518   17561 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1205 06:06:01.539396   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:01.542490   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:01.581042   17561 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1205 06:06:01.719320   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:02.027718   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:02.028098   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:02.222830   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:02.529149   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:02.531677   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:02.569272   17561 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.237096222s)
	I1205 06:06:02.731647   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:03.089930   17561 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.508853583s)
	I1205 06:06:03.090932   17561 addons.go:495] Verifying addon gcp-auth=true in "addons-704432"
	I1205 06:06:03.092704   17561 out.go:179] * Verifying gcp-auth addon...
	I1205 06:06:03.095135   17561 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1205 06:06:03.095521   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:03.115902   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:03.123029   17561 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1205 06:06:03.123055   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:03.223441   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:03.528819   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:03.532067   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:03.601159   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:03.723586   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:04.029666   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:04.030083   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:04.101950   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:04.220448   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:04.528791   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:04.529064   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:04.600211   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:04.719494   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:05.028058   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:05.030077   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:05.127868   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:05.217775   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:05.527628   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:05.527863   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:05.599079   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:05.720171   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:06.027642   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:06.028236   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:06.099123   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:06.219139   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:06.525612   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:06.528267   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:06.599175   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:06.717857   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:07.026624   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:07.027048   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:07.126271   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:07.227298   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:07.525868   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:07.525868   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:07.599584   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:07.718838   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:08.024763   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:08.026049   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:08.099266   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:08.219255   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:08.528915   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:08.533302   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:08.597941   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:08.717789   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:09.025947   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:09.026539   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:09.102052   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:09.219433   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:09.527545   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:09.527820   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:09.599111   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:09.717086   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:10.027558   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:10.028839   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:10.099734   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:10.219323   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:10.529534   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:10.530515   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:10.598581   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:10.718881   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:11.148928   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:11.151919   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:11.153607   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:11.252213   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:11.525935   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:11.527075   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:11.599937   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:11.717827   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:12.026163   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:12.026266   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:12.103138   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:12.218303   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:12.529438   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:12.529789   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:12.626920   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:12.718715   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:13.025735   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:13.025845   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:13.100354   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:13.220129   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:13.525584   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:13.526657   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:13.598616   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:13.718077   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:14.026198   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:14.026374   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:14.098478   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:14.218900   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:14.530036   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:14.530474   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:14.600491   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:14.721792   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:15.026554   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:15.027711   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:15.098882   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:15.225145   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:15.527829   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:15.529811   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:15.600007   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:15.719207   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:16.026090   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:16.027119   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:16.098961   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:16.221486   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:16.525420   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:16.525734   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:16.599609   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:16.717752   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:17.025085   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:17.027023   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:17.099081   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:17.218732   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:17.528272   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:17.531625   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:17.600987   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:17.717836   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:18.027569   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:18.032330   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:18.099181   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:18.222631   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:18.527798   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:18.530531   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:18.601313   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:18.718255   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:19.030796   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:19.033981   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:19.102011   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:19.226757   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:19.526912   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:19.532041   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:19.602566   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:19.737840   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:20.029293   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:20.029569   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:20.100347   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:20.217933   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:20.525763   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:20.527419   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:20.599281   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:20.719768   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:21.032089   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:21.032282   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:21.097999   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:21.220313   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:21.531577   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:21.531770   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:21.600367   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:21.717817   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:22.025746   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:22.037441   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:22.098046   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:22.221176   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:22.528430   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:22.531973   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:22.600790   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:22.717913   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:23.025210   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:23.027402   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:23.098505   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:23.249050   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:23.529039   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:23.532138   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:23.628542   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:23.719969   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:24.026763   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:24.029338   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:24.098534   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:24.222207   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:24.531304   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:24.531749   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:24.600868   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:24.721722   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:25.026615   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:25.028293   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:25.098603   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:25.220241   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:25.529889   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:25.530029   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:25.600562   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:25.720756   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:26.027614   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:26.027981   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:26.098803   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:26.219411   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:26.527717   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:26.527800   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:26.599874   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:26.717819   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:27.025981   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:27.027574   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:27.098720   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:27.218861   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:27.534094   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:27.534320   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:27.684673   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:27.718832   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:28.028944   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:28.029248   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:28.103819   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:28.223955   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:28.528149   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:28.528657   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:28.599064   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:28.717952   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:29.027559   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:29.029313   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:29.099314   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:29.219423   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:29.524253   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:29.527715   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:29.600480   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:29.718439   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:30.024366   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:30.025903   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:30.099678   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:30.218356   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:30.524656   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:06:30.526695   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:30.600558   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:30.718850   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:31.026789   17561 kapi.go:107] duration metric: took 31.005420511s to wait for kubernetes.io/minikube-addons=registry ...
	I1205 06:06:31.028011   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:31.099146   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:31.219281   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:31.528301   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:31.600525   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:31.729957   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:32.026480   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:32.102108   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:32.221113   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:32.527259   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:32.597960   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:32.721304   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:33.027609   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:33.099396   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:33.218016   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:33.533287   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:33.598353   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:33.719223   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:34.027929   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:34.099294   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:34.221994   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:34.526341   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:34.598594   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:34.718326   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:35.027261   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:35.098497   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:35.219583   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:35.526450   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:35.599103   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:35.717840   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:36.027359   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:36.099135   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:36.219843   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:36.528255   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:36.600570   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:36.718425   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:37.027477   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:37.098359   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:37.218583   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:37.528116   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:37.599250   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:37.724040   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:38.072464   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:38.170782   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:38.218589   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:38.526574   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:38.598371   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:38.720605   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:39.027773   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:39.102985   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:39.219330   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:39.526368   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:39.598119   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:39.719325   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:40.027670   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:40.127202   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:40.221511   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:40.529311   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:40.598930   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:40.717496   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:41.029542   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:41.098280   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:41.218010   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:41.526285   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:41.598877   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:41.717268   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:42.026714   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:42.098885   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:42.218061   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:42.528128   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:42.599326   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:42.719046   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:43.026636   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:43.099761   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:43.218785   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:43.525808   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:43.599376   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:43.717933   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:44.026203   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:44.100122   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:44.219274   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:44.527966   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:44.599286   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:44.718135   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:45.026774   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:45.098635   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:45.219742   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:45.526402   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:45.598291   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:45.802231   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:46.026828   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:46.098590   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:46.218491   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:46.525700   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:46.598202   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:46.717709   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:47.027022   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:47.099454   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:47.217778   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:47.526836   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:47.598734   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:47.717884   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:48.027462   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:48.098275   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:48.218461   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:48.526078   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:48.599153   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:48.718326   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:49.026220   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:49.099259   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:49.218512   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:49.525935   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:49.598526   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:49.718109   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:50.026808   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:50.098982   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:50.218176   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:50.526936   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:50.599331   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:50.717668   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:51.026187   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:51.099247   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:51.218040   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:51.526995   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:51.598972   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:51.717112   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:52.026838   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:52.098489   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:52.406974   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:52.526085   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:52.599355   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:52.718005   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:53.026615   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:53.098628   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:53.218509   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:53.526476   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:53.598535   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:53.718055   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:54.027767   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:54.098285   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:54.217564   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:54.527756   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:54.599079   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:54.718117   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:55.027025   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:55.098668   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:55.217946   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:55.526650   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:55.599152   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:55.717939   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:56.026438   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:56.098398   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:56.218371   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:56.528708   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:56.598681   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:56.718852   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:57.026414   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:57.098435   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:57.218701   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:57.526283   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:57.598268   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:57.717481   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:58.026559   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:58.098705   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:58.218304   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:58.527380   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:58.598376   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:58.717635   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:59.026677   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:59.098637   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:59.218075   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:06:59.526809   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:06:59.598393   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:06:59.717674   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:07:00.025922   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:07:00.099700   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:07:00.218342   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:07:00.527988   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:07:00.601577   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:07:00.717655   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:07:01.026255   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:07:01.098292   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:07:01.217119   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:07:01.527001   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:07:01.598772   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:07:01.718093   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:07:02.026923   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:07:02.098829   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:07:02.217934   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:07:02.527227   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:07:02.599909   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:07:02.717041   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:07:03.026883   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:07:03.099150   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:07:03.217165   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:07:03.526980   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:07:03.599146   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:07:03.717738   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:07:04.026621   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:07:04.127163   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:07:04.218162   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:07:04.527309   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:07:04.598881   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:07:04.716889   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:07:05.027127   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:07:05.099355   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:07:05.218736   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:07:05.526713   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:07:05.599135   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:07:05.718653   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:07:06.026218   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:07:06.099219   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:07:06.217325   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:07:06.527295   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:07:06.597894   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:07:06.718308   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:07:07.029270   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:07:07.098620   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:07:07.218433   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:07:07.528109   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:07:07.599247   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:07:07.718098   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:07:08.026426   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:07:08.098521   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:07:08.217751   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:07:08.527024   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:07:08.598878   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:07:08.717254   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:07:09.026729   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:07:09.099093   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:07:09.217274   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:07:09.526449   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:07:09.598296   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:07:09.717905   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:07:10.026289   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:07:10.098326   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:07:10.218439   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:07:10.528732   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:07:10.598971   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:07:10.717413   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:07:11.026214   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:07:11.098228   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:07:11.217438   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:07:11.526235   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:07:11.598997   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:07:11.717186   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:07:12.026944   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:07:12.099387   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:07:12.218171   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:07:12.526823   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:07:12.599481   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:07:12.718366   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:07:13.026753   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:07:13.098491   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:07:13.218097   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:07:13.526909   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:07:13.599456   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:07:13.718773   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:07:14.026678   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:07:14.098421   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:07:14.218650   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:07:14.526904   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:07:14.598988   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:07:14.717664   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:07:15.025781   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:07:15.098671   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:07:15.218408   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:07:15.527818   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:07:15.598313   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:07:15.717986   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:07:16.026581   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:07:16.098520   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:07:16.218355   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:07:16.527701   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:07:16.598191   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:07:16.718070   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:07:17.026378   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:07:17.098801   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:07:17.217765   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:07:17.527089   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:07:17.600095   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:07:17.717095   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:07:18.026482   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:07:18.098515   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:07:18.219141   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:07:18.526829   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:07:18.599809   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:07:18.718488   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:07:19.025978   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:07:19.099249   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:07:19.217528   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:07:19.526149   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:07:19.598824   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:07:19.717845   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:07:20.026106   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:07:20.098783   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:07:20.218363   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:07:20.526235   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:07:20.626146   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:07:20.717656   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:07:21.026328   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:07:21.098186   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:07:21.217593   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:07:21.526147   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:07:21.599458   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:07:21.718912   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:07:22.027105   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:07:22.099080   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:07:22.217727   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:07:22.526329   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:07:22.599815   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:07:22.719085   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:07:23.026436   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:07:23.098235   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:07:23.217506   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:07:23.525990   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:07:23.599141   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:07:23.717835   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:07:24.026776   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:07:24.098149   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:07:24.220704   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:07:24.528004   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:07:24.629920   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:07:24.717096   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:07:25.030410   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:07:25.098837   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:07:25.220822   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:07:25.530412   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:07:25.630432   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:07:25.719855   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:07:26.029920   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:07:26.110967   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:07:26.222185   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:07:26.536913   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:07:26.602026   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:07:26.718492   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:07:27.027489   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:07:27.099234   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:07:27.217674   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:07:27.527570   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:07:27.599040   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:07:27.719002   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:07:28.031758   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:07:28.099913   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:07:28.222220   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:07:28.529088   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:07:28.598820   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:07:28.720989   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:07:29.027639   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:07:29.128583   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:07:29.229637   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:07:29.527702   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:07:29.598880   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:07:29.719545   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:07:30.029882   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:07:30.102277   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:07:30.219500   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:07:30.527084   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:07:30.628483   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:07:30.730035   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:07:31.030881   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:07:31.101922   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:07:31.218924   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:07:31.526530   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:07:31.599634   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:07:31.719266   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:07:32.026838   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:07:32.115894   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:07:32.218737   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:07:32.526741   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:07:32.598778   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:07:32.721942   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:07:33.027076   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:07:33.110197   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:07:33.232621   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:07:33.529575   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:07:33.599264   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:07:33.718404   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:07:34.029746   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:07:34.114910   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:07:34.219652   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:07:34.526805   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:07:34.599954   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:07:34.720474   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:07:35.027762   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:07:35.099314   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:07:35.217599   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:07:35.527908   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:07:35.602122   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:07:35.718048   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:07:36.031012   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:07:36.101572   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:07:36.220342   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:07:36.587264   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:07:36.601140   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:07:36.720512   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:07:37.027009   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:07:37.127454   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:07:37.219399   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:07:37.529294   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:07:37.630284   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:07:37.727765   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:07:38.026956   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:07:38.127809   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:07:38.218415   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:07:38.531171   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:07:38.601266   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:07:38.720786   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:07:39.027610   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:07:39.100792   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:07:39.221500   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:07:39.527735   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:07:39.599701   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:07:39.723675   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:07:40.027344   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:07:40.098727   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:07:40.218327   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:07:40.530668   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:07:40.603371   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:07:40.717759   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:07:41.028884   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:07:41.129796   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:07:41.218451   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:07:41.527752   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:07:41.599320   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:07:41.717522   17561 kapi.go:107] duration metric: took 1m40.503655443s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1205 06:07:42.025938   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:07:42.099296   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:07:42.526612   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:07:42.599034   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:07:43.027005   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:07:43.098985   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:07:43.526618   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:07:43.598720   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:07:44.026695   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:07:44.099294   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:07:44.526741   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:07:44.598525   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:07:45.026731   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:07:45.099155   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:07:45.535526   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:07:45.598548   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:07:46.026705   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:07:46.098614   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:07:46.526629   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:07:46.598525   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:07:47.027144   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:07:47.099052   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:07:47.526718   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:07:47.598368   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:07:48.026465   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:07:48.098397   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:07:48.525910   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:07:48.598742   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:07:49.029468   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:07:49.098525   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:07:49.526325   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:07:49.598525   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:07:50.026880   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:07:50.098783   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:07:50.526556   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:07:50.598498   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:07:51.027131   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:07:51.099795   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:07:51.526148   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:07:51.599738   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:07:52.026864   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:07:52.098819   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:07:52.527287   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:07:52.599568   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:07:53.027164   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:07:53.099324   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:07:53.525746   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:07:53.598590   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:07:54.026668   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:07:54.099193   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:07:54.526790   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:07:54.598996   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:07:55.027510   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:07:55.099026   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:07:55.528473   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:07:55.598861   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:07:56.027351   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:07:56.098698   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:07:56.527042   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:07:56.598978   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:07:57.027120   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:07:57.098950   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:07:57.527735   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:07:57.598703   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:07:58.026590   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:07:58.098715   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:07:58.526164   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:07:58.599974   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:07:59.027018   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:07:59.098744   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:07:59.526727   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:07:59.598812   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:08:00.027133   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:08:00.099169   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:08:00.525802   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:08:00.598823   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:08:01.027731   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:08:01.099332   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:08:01.526134   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:08:01.599562   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:08:02.026134   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:08:02.099380   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:08:02.529186   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:08:02.599236   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:08:03.027518   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:08:03.099411   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:08:03.526384   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:08:03.598496   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:08:04.026126   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:08:04.098951   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:08:04.527331   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:08:04.597895   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:08:05.027829   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:08:05.099143   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:08:05.526831   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:08:05.599413   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:08:06.026558   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:08:06.098362   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:08:06.526447   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:08:06.599035   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:08:07.028397   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:08:07.128793   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:08:07.527800   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:08:07.599383   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:08:08.027060   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:08:08.100535   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:08:08.526379   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:08:08.598354   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:08:09.026895   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:08:09.099318   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:08:09.526275   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:08:09.598502   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:08:10.026055   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:08:10.099255   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:08:10.525948   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:08:10.598735   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:08:11.027789   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:08:11.099340   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:08:11.525889   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:08:11.598625   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:08:12.026209   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:08:12.099794   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:08:12.529156   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:08:12.599025   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:08:13.027404   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:08:13.098659   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:08:13.528847   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:08:13.600612   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:08:14.028953   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:08:14.099406   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:08:14.529883   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:08:14.599492   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:08:15.028323   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:08:15.100026   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:08:15.530584   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:08:15.600193   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:08:16.029023   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:08:16.100343   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:08:16.525875   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:08:16.600715   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:08:17.026987   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:08:17.099066   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:08:17.526770   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:08:17.603893   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:08:18.029001   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:08:18.099392   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:08:18.526952   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:08:18.600638   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:08:19.113744   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:08:19.113742   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:08:19.527305   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:08:19.599449   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:08:20.029801   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:08:20.100306   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:08:20.526234   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:08:20.626377   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:08:21.027011   17561 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:08:21.126855   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:08:21.526774   17561 kapi.go:107] duration metric: took 2m21.504219517s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1205 06:08:21.598446   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:08:22.104713   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:08:22.603200   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:08:23.103556   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:08:23.602112   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:08:24.101252   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:08:24.599264   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:08:25.099709   17561 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:08:25.599851   17561 kapi.go:107] duration metric: took 2m22.504716238s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1205 06:08:25.601609   17561 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-704432 cluster.
	I1205 06:08:25.602801   17561 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1205 06:08:25.603991   17561 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1205 06:08:25.605238   17561 out.go:179] * Enabled addons: amd-gpu-device-plugin, default-storageclass, registry-creds, cloud-spanner, ingress-dns, inspektor-gadget, storage-provisioner, nvidia-device-plugin, storage-provisioner-rancher, metrics-server, yakd, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I1205 06:08:25.606464   17561 addons.go:530] duration metric: took 2m34.746045073s for enable addons: enabled=[amd-gpu-device-plugin default-storageclass registry-creds cloud-spanner ingress-dns inspektor-gadget storage-provisioner nvidia-device-plugin storage-provisioner-rancher metrics-server yakd volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I1205 06:08:25.606504   17561 start.go:247] waiting for cluster config update ...
	I1205 06:08:25.606530   17561 start.go:256] writing updated cluster config ...
	I1205 06:08:25.606789   17561 ssh_runner.go:195] Run: rm -f paused
	I1205 06:08:25.613920   17561 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1205 06:08:25.617087   17561 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-92z7v" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 06:08:25.623005   17561 pod_ready.go:94] pod "coredns-66bc5c9577-92z7v" is "Ready"
	I1205 06:08:25.623029   17561 pod_ready.go:86] duration metric: took 5.917523ms for pod "coredns-66bc5c9577-92z7v" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 06:08:25.625093   17561 pod_ready.go:83] waiting for pod "etcd-addons-704432" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 06:08:25.630767   17561 pod_ready.go:94] pod "etcd-addons-704432" is "Ready"
	I1205 06:08:25.630789   17561 pod_ready.go:86] duration metric: took 5.67795ms for pod "etcd-addons-704432" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 06:08:25.632886   17561 pod_ready.go:83] waiting for pod "kube-apiserver-addons-704432" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 06:08:25.636724   17561 pod_ready.go:94] pod "kube-apiserver-addons-704432" is "Ready"
	I1205 06:08:25.636744   17561 pod_ready.go:86] duration metric: took 3.837435ms for pod "kube-apiserver-addons-704432" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 06:08:25.639315   17561 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-704432" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 06:08:26.018188   17561 pod_ready.go:94] pod "kube-controller-manager-addons-704432" is "Ready"
	I1205 06:08:26.018213   17561 pod_ready.go:86] duration metric: took 378.883028ms for pod "kube-controller-manager-addons-704432" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 06:08:26.220552   17561 pod_ready.go:83] waiting for pod "kube-proxy-fjwcj" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 06:08:26.618247   17561 pod_ready.go:94] pod "kube-proxy-fjwcj" is "Ready"
	I1205 06:08:26.618287   17561 pod_ready.go:86] duration metric: took 397.711116ms for pod "kube-proxy-fjwcj" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 06:08:26.818707   17561 pod_ready.go:83] waiting for pod "kube-scheduler-addons-704432" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 06:08:27.218063   17561 pod_ready.go:94] pod "kube-scheduler-addons-704432" is "Ready"
	I1205 06:08:27.218104   17561 pod_ready.go:86] duration metric: took 399.369683ms for pod "kube-scheduler-addons-704432" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 06:08:27.218130   17561 pod_ready.go:40] duration metric: took 1.604172009s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1205 06:08:27.266990   17561 start.go:625] kubectl: 1.34.2, cluster: 1.34.2 (minor skew: 0)
	I1205 06:08:27.268592   17561 out.go:179] * Done! kubectl is now configured to use "addons-704432" cluster and "default" namespace by default
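The gcp-auth messages earlier in this log mention opting a pod out of credential mounting by giving it a label with the `gcp-auth-skip-secret` key. As a hedged, illustrative sketch only (the pod name, image, and command below are made-up placeholders; only the label key and the cluster/profile name come from the log above), such a pod could be created like this:

	# Hypothetical example: create a pod that opts out of gcp-auth credential mounting.
	# The label key gcp-auth-skip-secret is taken from the minikube message above;
	# the pod name "no-gcp-creds", the busybox image, and the sleep command are illustrative.
	kubectl --context addons-704432 run no-gcp-creds --image=busybox \
	  --labels=gcp-auth-skip-secret=true --restart=Never -- sleep 3600

Because the mount is injected at admission time, the label would need to be present in the pod configuration when the pod is created (or the pod recreated / the addon re-enabled with --refresh, as the message notes), not added afterwards.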
	
	
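The CRI-O section below is the runtime's own debug log, dominated by the ListContainers, Version, and ImageFsInfo RPCs issued while these post-mortem logs were collected. As a rough sketch under the assumption that crictl is available on the node (as it normally is in the minikube guest), the same container listing can be pulled directly for comparison:

	# Hypothetical check: list CRI-O containers on the node via crictl;
	# this reports the same containers that appear in the ListContainersResponse entries below.
	out/minikube-linux-amd64 -p addons-704432 ssh "sudo crictl ps -a"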
	==> CRI-O <==
	Dec 05 06:11:37 addons-704432 crio[810]: time="2025-12-05 06:11:37.753021215Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=36afc4dd-b286-4dfc-bc02-756a3b8d3917 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 06:11:37 addons-704432 crio[810]: time="2025-12-05 06:11:37.753701036Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e6899c887817a87cafb509aef59eb8cbc3f264f1913c64565543e46c7c65d727,PodSandboxId:c4b2dfe88636fdab06f53f67c72c842d59cff68c8eced35c44ab01acb0f38fc4,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:39dc042e3fc681d32a66f99794aa502b44c509302b0e4cce7ff2b68ef08b2c30,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:889330b3166ec90ac796611be06baa86b3007feb55b284d8d5637a0f93d62270,State:CONTAINER_RUNNING,CreatedAt:1764915010252644835,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-dfcdc64b-xldsm,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: 7e85d7c3-139b-44e1-842c-ffd224d7e681,},Annotations:map[string]string{io.kubernetes.container.hash: 49d7edc0,io.kubernetes.container.ports: [{\"name\":\"ht
tp\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:42136ceaf78fa393b72e82742cc09827056ffc2940c9cc8c1839d4efdb1f2ecf,PodSandboxId:480b0a31f9ce9e61f31828265324766b8f51efd02d9e97570df1d95db9bbbb8e,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,State:CONTAINER_RUNNING,CreatedAt:1764914955883339570,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 45ff2b50-5b5e-4ab0-b6be-6d89182ace3e,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de2290b8e505ad6d2ddd672c9abbce48b94ac6e15663834eede07daa84aaed7b,PodSandboxId:4aa4e863ed4d47df76167a085f46ec687784caa58561da170eab9f8afcf35154,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1764914911706103107,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 101b2566-3243-4868-8046-
5629dae282ae,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3ee47592a4b7ded456f7590b8fcae018d9ce4d3b00451be1017cc80ac822a65,PodSandboxId:1aa3358ac2519d5ec2a06c34280d8081e219135df459febeb7094cac8fb46e44,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:7f2b00bd369a972bfb09acfe8c2525b99caeeeb54ab71d2822343e8fd4222e27,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:97fe896f8c07b0249ba6fb4d239c469b3db26a58d56dc36d65485838e6762bab,State:CONTAINER_RUNNING,CreatedAt:1764914900470849788,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-6c8bf45fb-n7gr6,io.kubernetes.pod.namespace: ingress-nginx,io.kub
ernetes.pod.uid: e5c23d02-56da-4f60-be78-fd08addc448f,},Annotations:map[string]string{io.kubernetes.container.hash: ee716186,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:850bd21af7422f2baddbf0c9cd902ee34916daf8e36e1575fc2bb7b008af0313,PodSandboxId:1e0146609f59cbb03892584a790883715ec54804597c9130747067c9c5ce169f,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,Annotations:map[string]string{},UserSpecifiedI
mage:,RuntimeHandler:,},ImageRef:884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,State:CONTAINER_EXITED,CreatedAt:1764914851051268395,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-smjbr,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 64d0cdcc-290d-4c8b-8d93-57bbc83baf3a,},Annotations:map[string]string{io.kubernetes.container.hash: a8b1ca00,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4529091d6b1b95b5e5c69fe85f21cf4d4389ed8042b038538f633e055eae24c0,PodSandboxId:3ff40b1fb2527b551f0e18bbff3b24a6b7038969e8ebe9f0f620a89c6438e3dd,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f,Annotation
s:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,State:CONTAINER_EXITED,CreatedAt:1764914850580544673,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-x7z9b,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 27e5d027-00c4-489f-83c5-98e84414dd6a,},Annotations:map[string]string{io.kubernetes.container.hash: 41135a0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d62bffcd53ab34eac50acf81dbf22699730ab9b214781c98451461b256eb157d,PodSandboxId:d7df2be7b97742620913af675f824cee6465e512635b5884a910da783c64584e,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a6
16c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1764914785692558956,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac80d838-4cea-4ae3-ae5a-e78a4e286e73,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4c6d6992afd494369f974d3b3b1035c9c6fec06c28c0898998a090a9e8a2713,PodSandboxId:0979b3767348df96e14affe9ad46f7e3a57365c105f2ffd7418b036203b9d8ca,Metadata:&ContainerMetadata{Name:am
d-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1764914760881310263,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-h9dbz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3ad30c3-9f62-45e0-b778-e81884060dcb,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4aa028386737b5adb68bca822bd990a0fbd7dfe510d1dd4cc5ad272e8c3993e,PodSandboxId:535c7c9dab7fef68579d956cae1c6dbf960fbbac585284361082fd3ba7b2
35fb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1764914759509569959,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7424361-125c-46b2-85d7-08d1d8a280f4,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:836fc56864e4d9f70d0cd043ec861551257dfd114107c68f16d304624913478b,PodSandboxId:f9a3a74b47dbdf1df589d1ce4273bb4d91b7b551da8d479235a4bf339e140e5a,Metadat
a:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1764914752176956025,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-92z7v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a23a376-d669-4854-be9a-0a3835a097bb,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCou
nt: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7fd72a7b1b3555112cfc8e7beadee61403ee61cb87d468f04deafb1e13ff14df,PodSandboxId:bb126da2a46102305e3c157f56e5c35adbe4676d8bfbe5561c258715f2eddd7c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1764914750889656359,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fjwcj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aefe502d-6848-492f-b3f3-63135d161647,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f60a9eb9b31747dc91ab1819fbd4117f44a2b1f108a50cf4accf641573d8b19f,PodSandboxId:0f7e1b8da7322cc6707f11d0441059b84f4f2543567177d5419759ec66919c04,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1764914739121691040,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-704432,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e941047cd8ae599454c056528732ba16,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"T
CP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ca2ad1c51c44a217f4b9f3dfac218e9a0c651e2100ae011921ab3564d640e45,PodSandboxId:54226659a9ba2a189b63df48043af45f6c2837791671b36519fe3bced0032176,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1764914739065148664,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-704432,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af36c98bb086a092e966b7f514edaf97,},Annotations:map[string]string{io.kubernetes.containe
r.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:927678d5be4f7c31a9f6e1b1bdf5361f7b5094418a3db936684a28065e55a95b,PodSandboxId:a2afa4b18570a004120ce333424476e027172576bb9e386283d2484fe0d35f2b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1764914739056011005,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-704432,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 5dca7bcc8e6f36f7abbf0d2b972c021e,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5717db985d7441715010cb014e949e362371d7ce8a89fd89fb0139a23fa852ac,PodSandboxId:b19ae384311b919e1af0680d44d1beb3e3f81cb123628fad1634e167a0f4ba90,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1764914738679588818,Labels:map[string]string{io.kubernetes.container.name: kube-sche
duler,io.kubernetes.pod.name: kube-scheduler-addons-704432,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf961658ad64085be066bf0988d957b1,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=36afc4dd-b286-4dfc-bc02-756a3b8d3917 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 06:11:37 addons-704432 crio[810]: time="2025-12-05 06:11:37.789235981Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a45819cb-839a-469a-a6ee-8b5018eda7ff name=/runtime.v1.RuntimeService/Version
	Dec 05 06:11:37 addons-704432 crio[810]: time="2025-12-05 06:11:37.789564697Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a45819cb-839a-469a-a6ee-8b5018eda7ff name=/runtime.v1.RuntimeService/Version
	Dec 05 06:11:37 addons-704432 crio[810]: time="2025-12-05 06:11:37.792048347Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=844d5729-2cb5-4b5b-9a8f-d4a703b0eefb name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 06:11:37 addons-704432 crio[810]: time="2025-12-05 06:11:37.794082650Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1764915097793986501,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:585496,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=844d5729-2cb5-4b5b-9a8f-d4a703b0eefb name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 06:11:37 addons-704432 crio[810]: time="2025-12-05 06:11:37.797167001Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9a4f43c5-31cb-484d-8682-33adda937c46 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 06:11:37 addons-704432 crio[810]: time="2025-12-05 06:11:37.797239113Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9a4f43c5-31cb-484d-8682-33adda937c46 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 06:11:37 addons-704432 crio[810]: time="2025-12-05 06:11:37.797644169Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e6899c887817a87cafb509aef59eb8cbc3f264f1913c64565543e46c7c65d727,PodSandboxId:c4b2dfe88636fdab06f53f67c72c842d59cff68c8eced35c44ab01acb0f38fc4,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:39dc042e3fc681d32a66f99794aa502b44c509302b0e4cce7ff2b68ef08b2c30,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:889330b3166ec90ac796611be06baa86b3007feb55b284d8d5637a0f93d62270,State:CONTAINER_RUNNING,CreatedAt:1764915010252644835,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-dfcdc64b-xldsm,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: 7e85d7c3-139b-44e1-842c-ffd224d7e681,},Annotations:map[string]string{io.kubernetes.container.hash: 49d7edc0,io.kubernetes.container.ports: [{\"name\":\"ht
tp\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:42136ceaf78fa393b72e82742cc09827056ffc2940c9cc8c1839d4efdb1f2ecf,PodSandboxId:480b0a31f9ce9e61f31828265324766b8f51efd02d9e97570df1d95db9bbbb8e,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,State:CONTAINER_RUNNING,CreatedAt:1764914955883339570,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 45ff2b50-5b5e-4ab0-b6be-6d89182ace3e,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de2290b8e505ad6d2ddd672c9abbce48b94ac6e15663834eede07daa84aaed7b,PodSandboxId:4aa4e863ed4d47df76167a085f46ec687784caa58561da170eab9f8afcf35154,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1764914911706103107,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 101b2566-3243-4868-8046-
5629dae282ae,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3ee47592a4b7ded456f7590b8fcae018d9ce4d3b00451be1017cc80ac822a65,PodSandboxId:1aa3358ac2519d5ec2a06c34280d8081e219135df459febeb7094cac8fb46e44,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:7f2b00bd369a972bfb09acfe8c2525b99caeeeb54ab71d2822343e8fd4222e27,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:97fe896f8c07b0249ba6fb4d239c469b3db26a58d56dc36d65485838e6762bab,State:CONTAINER_RUNNING,CreatedAt:1764914900470849788,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-6c8bf45fb-n7gr6,io.kubernetes.pod.namespace: ingress-nginx,io.kub
ernetes.pod.uid: e5c23d02-56da-4f60-be78-fd08addc448f,},Annotations:map[string]string{io.kubernetes.container.hash: ee716186,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:850bd21af7422f2baddbf0c9cd902ee34916daf8e36e1575fc2bb7b008af0313,PodSandboxId:1e0146609f59cbb03892584a790883715ec54804597c9130747067c9c5ce169f,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,Annotations:map[string]string{},UserSpecifiedI
mage:,RuntimeHandler:,},ImageRef:884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,State:CONTAINER_EXITED,CreatedAt:1764914851051268395,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-smjbr,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 64d0cdcc-290d-4c8b-8d93-57bbc83baf3a,},Annotations:map[string]string{io.kubernetes.container.hash: a8b1ca00,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4529091d6b1b95b5e5c69fe85f21cf4d4389ed8042b038538f633e055eae24c0,PodSandboxId:3ff40b1fb2527b551f0e18bbff3b24a6b7038969e8ebe9f0f620a89c6438e3dd,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f,Annotation
s:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,State:CONTAINER_EXITED,CreatedAt:1764914850580544673,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-x7z9b,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 27e5d027-00c4-489f-83c5-98e84414dd6a,},Annotations:map[string]string{io.kubernetes.container.hash: 41135a0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d62bffcd53ab34eac50acf81dbf22699730ab9b214781c98451461b256eb157d,PodSandboxId:d7df2be7b97742620913af675f824cee6465e512635b5884a910da783c64584e,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a6
16c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1764914785692558956,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac80d838-4cea-4ae3-ae5a-e78a4e286e73,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4c6d6992afd494369f974d3b3b1035c9c6fec06c28c0898998a090a9e8a2713,PodSandboxId:0979b3767348df96e14affe9ad46f7e3a57365c105f2ffd7418b036203b9d8ca,Metadata:&ContainerMetadata{Name:am
d-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1764914760881310263,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-h9dbz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3ad30c3-9f62-45e0-b778-e81884060dcb,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4aa028386737b5adb68bca822bd990a0fbd7dfe510d1dd4cc5ad272e8c3993e,PodSandboxId:535c7c9dab7fef68579d956cae1c6dbf960fbbac585284361082fd3ba7b2
35fb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1764914759509569959,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7424361-125c-46b2-85d7-08d1d8a280f4,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:836fc56864e4d9f70d0cd043ec861551257dfd114107c68f16d304624913478b,PodSandboxId:f9a3a74b47dbdf1df589d1ce4273bb4d91b7b551da8d479235a4bf339e140e5a,Metadat
a:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1764914752176956025,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-92z7v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a23a376-d669-4854-be9a-0a3835a097bb,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCou
nt: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7fd72a7b1b3555112cfc8e7beadee61403ee61cb87d468f04deafb1e13ff14df,PodSandboxId:bb126da2a46102305e3c157f56e5c35adbe4676d8bfbe5561c258715f2eddd7c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1764914750889656359,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fjwcj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aefe502d-6848-492f-b3f3-63135d161647,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f60a9eb9b31747dc91ab1819fbd4117f44a2b1f108a50cf4accf641573d8b19f,PodSandboxId:0f7e1b8da7322cc6707f11d0441059b84f4f2543567177d5419759ec66919c04,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1764914739121691040,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-704432,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e941047cd8ae599454c056528732ba16,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"T
CP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ca2ad1c51c44a217f4b9f3dfac218e9a0c651e2100ae011921ab3564d640e45,PodSandboxId:54226659a9ba2a189b63df48043af45f6c2837791671b36519fe3bced0032176,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1764914739065148664,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-704432,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af36c98bb086a092e966b7f514edaf97,},Annotations:map[string]string{io.kubernetes.containe
r.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:927678d5be4f7c31a9f6e1b1bdf5361f7b5094418a3db936684a28065e55a95b,PodSandboxId:a2afa4b18570a004120ce333424476e027172576bb9e386283d2484fe0d35f2b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1764914739056011005,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-704432,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 5dca7bcc8e6f36f7abbf0d2b972c021e,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5717db985d7441715010cb014e949e362371d7ce8a89fd89fb0139a23fa852ac,PodSandboxId:b19ae384311b919e1af0680d44d1beb3e3f81cb123628fad1634e167a0f4ba90,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1764914738679588818,Labels:map[string]string{io.kubernetes.container.name: kube-sche
duler,io.kubernetes.pod.name: kube-scheduler-addons-704432,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf961658ad64085be066bf0988d957b1,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9a4f43c5-31cb-484d-8682-33adda937c46 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 06:11:37 addons-704432 crio[810]: time="2025-12-05 06:11:37.830348772Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4411a2dc-99a0-44d7-8dc4-5dfd446722ed name=/runtime.v1.RuntimeService/Version
	Dec 05 06:11:37 addons-704432 crio[810]: time="2025-12-05 06:11:37.830468000Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4411a2dc-99a0-44d7-8dc4-5dfd446722ed name=/runtime.v1.RuntimeService/Version
	Dec 05 06:11:37 addons-704432 crio[810]: time="2025-12-05 06:11:37.832031163Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4adb94a0-6d15-4484-b979-446b708e97ea name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 06:11:37 addons-704432 crio[810]: time="2025-12-05 06:11:37.833305628Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1764915097833277226,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:585496,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4adb94a0-6d15-4484-b979-446b708e97ea name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 06:11:37 addons-704432 crio[810]: time="2025-12-05 06:11:37.834415379Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=592c387b-26db-4770-b632-2baf930afea4 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 06:11:37 addons-704432 crio[810]: time="2025-12-05 06:11:37.834479347Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=592c387b-26db-4770-b632-2baf930afea4 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 06:11:37 addons-704432 crio[810]: time="2025-12-05 06:11:37.834776027Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e6899c887817a87cafb509aef59eb8cbc3f264f1913c64565543e46c7c65d727,PodSandboxId:c4b2dfe88636fdab06f53f67c72c842d59cff68c8eced35c44ab01acb0f38fc4,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:39dc042e3fc681d32a66f99794aa502b44c509302b0e4cce7ff2b68ef08b2c30,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:889330b3166ec90ac796611be06baa86b3007feb55b284d8d5637a0f93d62270,State:CONTAINER_RUNNING,CreatedAt:1764915010252644835,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-dfcdc64b-xldsm,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: 7e85d7c3-139b-44e1-842c-ffd224d7e681,},Annotations:map[string]string{io.kubernetes.container.hash: 49d7edc0,io.kubernetes.container.ports: [{\"name\":\"ht
tp\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:42136ceaf78fa393b72e82742cc09827056ffc2940c9cc8c1839d4efdb1f2ecf,PodSandboxId:480b0a31f9ce9e61f31828265324766b8f51efd02d9e97570df1d95db9bbbb8e,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,State:CONTAINER_RUNNING,CreatedAt:1764914955883339570,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 45ff2b50-5b5e-4ab0-b6be-6d89182ace3e,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de2290b8e505ad6d2ddd672c9abbce48b94ac6e15663834eede07daa84aaed7b,PodSandboxId:4aa4e863ed4d47df76167a085f46ec687784caa58561da170eab9f8afcf35154,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1764914911706103107,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 101b2566-3243-4868-8046-
5629dae282ae,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3ee47592a4b7ded456f7590b8fcae018d9ce4d3b00451be1017cc80ac822a65,PodSandboxId:1aa3358ac2519d5ec2a06c34280d8081e219135df459febeb7094cac8fb46e44,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:7f2b00bd369a972bfb09acfe8c2525b99caeeeb54ab71d2822343e8fd4222e27,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:97fe896f8c07b0249ba6fb4d239c469b3db26a58d56dc36d65485838e6762bab,State:CONTAINER_RUNNING,CreatedAt:1764914900470849788,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-6c8bf45fb-n7gr6,io.kubernetes.pod.namespace: ingress-nginx,io.kub
ernetes.pod.uid: e5c23d02-56da-4f60-be78-fd08addc448f,},Annotations:map[string]string{io.kubernetes.container.hash: ee716186,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:850bd21af7422f2baddbf0c9cd902ee34916daf8e36e1575fc2bb7b008af0313,PodSandboxId:1e0146609f59cbb03892584a790883715ec54804597c9130747067c9c5ce169f,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,Annotations:map[string]string{},UserSpecifiedI
mage:,RuntimeHandler:,},ImageRef:884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,State:CONTAINER_EXITED,CreatedAt:1764914851051268395,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-smjbr,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 64d0cdcc-290d-4c8b-8d93-57bbc83baf3a,},Annotations:map[string]string{io.kubernetes.container.hash: a8b1ca00,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4529091d6b1b95b5e5c69fe85f21cf4d4389ed8042b038538f633e055eae24c0,PodSandboxId:3ff40b1fb2527b551f0e18bbff3b24a6b7038969e8ebe9f0f620a89c6438e3dd,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f,Annotation
s:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,State:CONTAINER_EXITED,CreatedAt:1764914850580544673,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-x7z9b,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 27e5d027-00c4-489f-83c5-98e84414dd6a,},Annotations:map[string]string{io.kubernetes.container.hash: 41135a0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d62bffcd53ab34eac50acf81dbf22699730ab9b214781c98451461b256eb157d,PodSandboxId:d7df2be7b97742620913af675f824cee6465e512635b5884a910da783c64584e,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a6
16c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1764914785692558956,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac80d838-4cea-4ae3-ae5a-e78a4e286e73,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4c6d6992afd494369f974d3b3b1035c9c6fec06c28c0898998a090a9e8a2713,PodSandboxId:0979b3767348df96e14affe9ad46f7e3a57365c105f2ffd7418b036203b9d8ca,Metadata:&ContainerMetadata{Name:am
d-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1764914760881310263,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-h9dbz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3ad30c3-9f62-45e0-b778-e81884060dcb,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4aa028386737b5adb68bca822bd990a0fbd7dfe510d1dd4cc5ad272e8c3993e,PodSandboxId:535c7c9dab7fef68579d956cae1c6dbf960fbbac585284361082fd3ba7b2
35fb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1764914759509569959,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7424361-125c-46b2-85d7-08d1d8a280f4,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:836fc56864e4d9f70d0cd043ec861551257dfd114107c68f16d304624913478b,PodSandboxId:f9a3a74b47dbdf1df589d1ce4273bb4d91b7b551da8d479235a4bf339e140e5a,Metadat
a:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1764914752176956025,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-92z7v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a23a376-d669-4854-be9a-0a3835a097bb,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCou
nt: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7fd72a7b1b3555112cfc8e7beadee61403ee61cb87d468f04deafb1e13ff14df,PodSandboxId:bb126da2a46102305e3c157f56e5c35adbe4676d8bfbe5561c258715f2eddd7c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1764914750889656359,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fjwcj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aefe502d-6848-492f-b3f3-63135d161647,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f60a9eb9b31747dc91ab1819fbd4117f44a2b1f108a50cf4accf641573d8b19f,PodSandboxId:0f7e1b8da7322cc6707f11d0441059b84f4f2543567177d5419759ec66919c04,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1764914739121691040,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-704432,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e941047cd8ae599454c056528732ba16,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"T
CP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ca2ad1c51c44a217f4b9f3dfac218e9a0c651e2100ae011921ab3564d640e45,PodSandboxId:54226659a9ba2a189b63df48043af45f6c2837791671b36519fe3bced0032176,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1764914739065148664,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-704432,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af36c98bb086a092e966b7f514edaf97,},Annotations:map[string]string{io.kubernetes.containe
r.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:927678d5be4f7c31a9f6e1b1bdf5361f7b5094418a3db936684a28065e55a95b,PodSandboxId:a2afa4b18570a004120ce333424476e027172576bb9e386283d2484fe0d35f2b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1764914739056011005,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-704432,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 5dca7bcc8e6f36f7abbf0d2b972c021e,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5717db985d7441715010cb014e949e362371d7ce8a89fd89fb0139a23fa852ac,PodSandboxId:b19ae384311b919e1af0680d44d1beb3e3f81cb123628fad1634e167a0f4ba90,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1764914738679588818,Labels:map[string]string{io.kubernetes.container.name: kube-sche
duler,io.kubernetes.pod.name: kube-scheduler-addons-704432,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf961658ad64085be066bf0988d957b1,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=592c387b-26db-4770-b632-2baf930afea4 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 06:11:37 addons-704432 crio[810]: time="2025-12-05 06:11:37.864790015Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c6c69429-171f-45ec-ba6b-fc684402bdd8 name=/runtime.v1.RuntimeService/Version
	Dec 05 06:11:37 addons-704432 crio[810]: time="2025-12-05 06:11:37.864909224Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c6c69429-171f-45ec-ba6b-fc684402bdd8 name=/runtime.v1.RuntimeService/Version
	Dec 05 06:11:37 addons-704432 crio[810]: time="2025-12-05 06:11:37.866312325Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=dfbc9e00-06fa-4d86-9e1f-d8bc659d602f name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 06:11:37 addons-704432 crio[810]: time="2025-12-05 06:11:37.867466353Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1764915097867440013,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:585496,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=dfbc9e00-06fa-4d86-9e1f-d8bc659d602f name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 06:11:37 addons-704432 crio[810]: time="2025-12-05 06:11:37.868305624Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=56759c77-5699-4f80-b786-f2870e54ff13 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 06:11:37 addons-704432 crio[810]: time="2025-12-05 06:11:37.868640442Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=56759c77-5699-4f80-b786-f2870e54ff13 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 06:11:37 addons-704432 crio[810]: time="2025-12-05 06:11:37.869173716Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e6899c887817a87cafb509aef59eb8cbc3f264f1913c64565543e46c7c65d727,PodSandboxId:c4b2dfe88636fdab06f53f67c72c842d59cff68c8eced35c44ab01acb0f38fc4,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:39dc042e3fc681d32a66f99794aa502b44c509302b0e4cce7ff2b68ef08b2c30,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:889330b3166ec90ac796611be06baa86b3007feb55b284d8d5637a0f93d62270,State:CONTAINER_RUNNING,CreatedAt:1764915010252644835,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-dfcdc64b-xldsm,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: 7e85d7c3-139b-44e1-842c-ffd224d7e681,},Annotations:map[string]string{io.kubernetes.container.hash: 49d7edc0,io.kubernetes.container.ports: [{\"name\":\"ht
tp\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:42136ceaf78fa393b72e82742cc09827056ffc2940c9cc8c1839d4efdb1f2ecf,PodSandboxId:480b0a31f9ce9e61f31828265324766b8f51efd02d9e97570df1d95db9bbbb8e,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,State:CONTAINER_RUNNING,CreatedAt:1764914955883339570,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 45ff2b50-5b5e-4ab0-b6be-6d89182ace3e,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de2290b8e505ad6d2ddd672c9abbce48b94ac6e15663834eede07daa84aaed7b,PodSandboxId:4aa4e863ed4d47df76167a085f46ec687784caa58561da170eab9f8afcf35154,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1764914911706103107,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 101b2566-3243-4868-8046-
5629dae282ae,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3ee47592a4b7ded456f7590b8fcae018d9ce4d3b00451be1017cc80ac822a65,PodSandboxId:1aa3358ac2519d5ec2a06c34280d8081e219135df459febeb7094cac8fb46e44,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:7f2b00bd369a972bfb09acfe8c2525b99caeeeb54ab71d2822343e8fd4222e27,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:97fe896f8c07b0249ba6fb4d239c469b3db26a58d56dc36d65485838e6762bab,State:CONTAINER_RUNNING,CreatedAt:1764914900470849788,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-6c8bf45fb-n7gr6,io.kubernetes.pod.namespace: ingress-nginx,io.kub
ernetes.pod.uid: e5c23d02-56da-4f60-be78-fd08addc448f,},Annotations:map[string]string{io.kubernetes.container.hash: ee716186,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:850bd21af7422f2baddbf0c9cd902ee34916daf8e36e1575fc2bb7b008af0313,PodSandboxId:1e0146609f59cbb03892584a790883715ec54804597c9130747067c9c5ce169f,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,Annotations:map[string]string{},UserSpecifiedI
mage:,RuntimeHandler:,},ImageRef:884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,State:CONTAINER_EXITED,CreatedAt:1764914851051268395,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-smjbr,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 64d0cdcc-290d-4c8b-8d93-57bbc83baf3a,},Annotations:map[string]string{io.kubernetes.container.hash: a8b1ca00,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4529091d6b1b95b5e5c69fe85f21cf4d4389ed8042b038538f633e055eae24c0,PodSandboxId:3ff40b1fb2527b551f0e18bbff3b24a6b7038969e8ebe9f0f620a89c6438e3dd,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f,Annotation
s:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,State:CONTAINER_EXITED,CreatedAt:1764914850580544673,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-x7z9b,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 27e5d027-00c4-489f-83c5-98e84414dd6a,},Annotations:map[string]string{io.kubernetes.container.hash: 41135a0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d62bffcd53ab34eac50acf81dbf22699730ab9b214781c98451461b256eb157d,PodSandboxId:d7df2be7b97742620913af675f824cee6465e512635b5884a910da783c64584e,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a6
16c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1764914785692558956,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac80d838-4cea-4ae3-ae5a-e78a4e286e73,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4c6d6992afd494369f974d3b3b1035c9c6fec06c28c0898998a090a9e8a2713,PodSandboxId:0979b3767348df96e14affe9ad46f7e3a57365c105f2ffd7418b036203b9d8ca,Metadata:&ContainerMetadata{Name:am
d-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1764914760881310263,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-h9dbz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3ad30c3-9f62-45e0-b778-e81884060dcb,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4aa028386737b5adb68bca822bd990a0fbd7dfe510d1dd4cc5ad272e8c3993e,PodSandboxId:535c7c9dab7fef68579d956cae1c6dbf960fbbac585284361082fd3ba7b2
35fb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1764914759509569959,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7424361-125c-46b2-85d7-08d1d8a280f4,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:836fc56864e4d9f70d0cd043ec861551257dfd114107c68f16d304624913478b,PodSandboxId:f9a3a74b47dbdf1df589d1ce4273bb4d91b7b551da8d479235a4bf339e140e5a,Metadat
a:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1764914752176956025,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-92z7v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a23a376-d669-4854-be9a-0a3835a097bb,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCou
nt: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7fd72a7b1b3555112cfc8e7beadee61403ee61cb87d468f04deafb1e13ff14df,PodSandboxId:bb126da2a46102305e3c157f56e5c35adbe4676d8bfbe5561c258715f2eddd7c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1764914750889656359,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fjwcj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aefe502d-6848-492f-b3f3-63135d161647,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f60a9eb9b31747dc91ab1819fbd4117f44a2b1f108a50cf4accf641573d8b19f,PodSandboxId:0f7e1b8da7322cc6707f11d0441059b84f4f2543567177d5419759ec66919c04,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1764914739121691040,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-704432,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e941047cd8ae599454c056528732ba16,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"T
CP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ca2ad1c51c44a217f4b9f3dfac218e9a0c651e2100ae011921ab3564d640e45,PodSandboxId:54226659a9ba2a189b63df48043af45f6c2837791671b36519fe3bced0032176,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1764914739065148664,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-704432,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af36c98bb086a092e966b7f514edaf97,},Annotations:map[string]string{io.kubernetes.containe
r.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:927678d5be4f7c31a9f6e1b1bdf5361f7b5094418a3db936684a28065e55a95b,PodSandboxId:a2afa4b18570a004120ce333424476e027172576bb9e386283d2484fe0d35f2b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1764914739056011005,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-704432,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 5dca7bcc8e6f36f7abbf0d2b972c021e,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5717db985d7441715010cb014e949e362371d7ce8a89fd89fb0139a23fa852ac,PodSandboxId:b19ae384311b919e1af0680d44d1beb3e3f81cb123628fad1634e167a0f4ba90,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1764914738679588818,Labels:map[string]string{io.kubernetes.container.name: kube-sche
duler,io.kubernetes.pod.name: kube-scheduler-addons-704432,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf961658ad64085be066bf0988d957b1,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=56759c77-5699-4f80-b786-f2870e54ff13 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 06:11:37 addons-704432 crio[810]: time="2025-12-05 06:11:37.885166159Z" level=debug msg="Content-Type from manifest GET is \"application/vnd.docker.distribution.manifest.list.v2+json\"" file="docker/docker_client.go:964"
	Dec 05 06:11:37 addons-704432 crio[810]: time="2025-12-05 06:11:37.885347759Z" level=debug msg="GET https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86" file="docker/docker_client.go:631"
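Note: the Version, ImageFsInfo, and ListContainers requests above are the kubelet polling CRI-O over the CRI socket, visible only because the daemon is logging at debug level. Assuming shell access to the node via the same minikube binary used in the test, the same endpoints can be queried directly with crictl (a sketch, not part of the test run):

	out/minikube-linux-amd64 -p addons-704432 ssh "sudo crictl version"      # RuntimeService/Version
	out/minikube-linux-amd64 -p addons-704432 ssh "sudo crictl imagefsinfo"  # ImageService/ImageFsInfo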
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                        NAMESPACE
	e6899c887817a       ghcr.io/headlamp-k8s/headlamp@sha256:39dc042e3fc681d32a66f99794aa502b44c509302b0e4cce7ff2b68ef08b2c30                        About a minute ago   Running             headlamp                  0                   c4b2dfe88636f       headlamp-dfcdc64b-xldsm                    headlamp
	42136ceaf78fa       docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7                              2 minutes ago        Running             nginx                     0                   480b0a31f9ce9       nginx                                      default
	de2290b8e505a       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          3 minutes ago        Running             busybox                   0                   4aa4e863ed4d4       busybox                                    default
	d3ee47592a4b7       registry.k8s.io/ingress-nginx/controller@sha256:7f2b00bd369a972bfb09acfe8c2525b99caeeeb54ab71d2822343e8fd4222e27             3 minutes ago        Running             controller                0                   1aa3358ac2519       ingress-nginx-controller-6c8bf45fb-n7gr6   ingress-nginx
	850bd21af7422       884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45                                                             4 minutes ago        Exited              patch                     1                   1e0146609f59c       ingress-nginx-admission-patch-smjbr        ingress-nginx
	4529091d6b1b9       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f   4 minutes ago        Exited              create                    0                   3ff40b1fb2527       ingress-nginx-admission-create-x7z9b       ingress-nginx
	d62bffcd53ab3       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7               5 minutes ago        Running             minikube-ingress-dns      0                   d7df2be7b9774       kube-ingress-dns-minikube                  kube-system
	b4c6d6992afd4       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                     5 minutes ago        Running             amd-gpu-device-plugin     0                   0979b3767348d       amd-gpu-device-plugin-h9dbz                kube-system
	c4aa028386737       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             5 minutes ago        Running             storage-provisioner       0                   535c7c9dab7fe       storage-provisioner                        kube-system
	836fc56864e4d       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                             5 minutes ago        Running             coredns                   0                   f9a3a74b47dbd       coredns-66bc5c9577-92z7v                   kube-system
	7fd72a7b1b355       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                                             5 minutes ago        Running             kube-proxy                0                   bb126da2a4610       kube-proxy-fjwcj                           kube-system
	f60a9eb9b3174       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                                             5 minutes ago        Running             etcd                      0                   0f7e1b8da7322       etcd-addons-704432                         kube-system
	4ca2ad1c51c44       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                                             5 minutes ago        Running             kube-controller-manager   0                   54226659a9ba2       kube-controller-manager-addons-704432      kube-system
	927678d5be4f7       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85                                                             5 minutes ago        Running             kube-apiserver            0                   a2afa4b18570a       kube-apiserver-addons-704432               kube-system
	5717db985d744       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                                             5 minutes ago        Running             kube-scheduler            0                   b19ae384311b9       kube-scheduler-addons-704432               kube-system
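	The table above is the crictl view of the same containers returned by the ListContainers responses earlier in the log. Assuming the cluster is still running, it can be regenerated with (a sketch):

	out/minikube-linux-amd64 -p addons-704432 ssh "sudo crictl ps -a"   # -a also lists the Exited admission create/patch containers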
	
	
	==> coredns [836fc56864e4d9f70d0cd043ec861551257dfd114107c68f16d304624913478b] <==
	[INFO] 10.244.0.8:60558 - 51159 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000121863s
	[INFO] 10.244.0.8:60558 - 10662 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000165456s
	[INFO] 10.244.0.8:60558 - 38542 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000075863s
	[INFO] 10.244.0.8:60558 - 47037 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000105333s
	[INFO] 10.244.0.8:60558 - 30232 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000122965s
	[INFO] 10.244.0.8:60558 - 62227 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000127393s
	[INFO] 10.244.0.8:60558 - 5276 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000071545s
	[INFO] 10.244.0.8:51101 - 65364 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000149629s
	[INFO] 10.244.0.8:51101 - 65041 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000087246s
	[INFO] 10.244.0.8:39134 - 36234 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000137097s
	[INFO] 10.244.0.8:39134 - 36490 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000364591s
	[INFO] 10.244.0.8:46682 - 9800 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000104214s
	[INFO] 10.244.0.8:46682 - 9566 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000187339s
	[INFO] 10.244.0.8:57494 - 20955 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000097485s
	[INFO] 10.244.0.8:57494 - 20519 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000074303s
	[INFO] 10.244.0.23:33818 - 49924 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000692564s
	[INFO] 10.244.0.23:32851 - 21590 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000152357s
	[INFO] 10.244.0.23:50986 - 47657 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000154398s
	[INFO] 10.244.0.23:46679 - 58825 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000110193s
	[INFO] 10.244.0.23:59705 - 7379 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000115224s
	[INFO] 10.244.0.23:52820 - 52802 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000061122s
	[INFO] 10.244.0.23:47750 - 41985 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.004087316s
	[INFO] 10.244.0.23:35889 - 47840 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 534 0.004299929s
	[INFO] 10.244.0.28:43678 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000455141s
	[INFO] 10.244.0.28:38529 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000278379s
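	The NXDOMAIN/NOERROR pairs above are the normal ndots:5 search-path expansion: each lookup for registry.kube-system.svc.cluster.local is first tried with the pod's search domains appended before the fully qualified name resolves. A minimal way to reproduce the pattern from inside the cluster, assuming the busybox pod from this run is still present and its image ships the nslookup applet (a sketch):

	kubectl --context addons-704432 exec busybox -- nslookup registry.kube-system.svc.cluster.local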
	
	
	==> describe nodes <==
	Name:               addons-704432
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-704432
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d53406164b08000276c1d84507c3250851dada45
	                    minikube.k8s.io/name=addons-704432
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_05T06_05_46_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-704432
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 05 Dec 2025 06:05:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-704432
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 05 Dec 2025 06:11:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 05 Dec 2025 06:10:20 +0000   Fri, 05 Dec 2025 06:05:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 05 Dec 2025 06:10:20 +0000   Fri, 05 Dec 2025 06:05:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 05 Dec 2025 06:10:20 +0000   Fri, 05 Dec 2025 06:05:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 05 Dec 2025 06:10:20 +0000   Fri, 05 Dec 2025 06:05:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.31
	  Hostname:    addons-704432
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001784Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001784Ki
	  pods:               110
	System Info:
	  Machine ID:                 9d1ccf643f5a409b9d6d1dc0517842af
	  System UUID:                9d1ccf64-3f5a-409b-9d6d-1dc0517842af
	  Boot ID:                    258bc730-6a93-4e36-ad1e-a70e29af33ee
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m11s
	  default                     hello-world-app-5d498dc89-mn5w9             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m27s
	  headlamp                    headlamp-dfcdc64b-xldsm                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m18s
	  ingress-nginx               ingress-nginx-controller-6c8bf45fb-n7gr6    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         5m39s
	  kube-system                 amd-gpu-device-plugin-h9dbz                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m45s
	  kube-system                 coredns-66bc5c9577-92z7v                    100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     5m48s
	  kube-system                 etcd-addons-704432                          100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         5m53s
	  kube-system                 kube-apiserver-addons-704432                250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m54s
	  kube-system                 kube-controller-manager-addons-704432       200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m53s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m42s
	  kube-system                 kube-proxy-fjwcj                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m48s
	  kube-system                 kube-scheduler-addons-704432                100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m53s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m42s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             260Mi (6%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age              From             Message
	  ----    ------                   ----             ----             -------
	  Normal  Starting                 5m46s            kube-proxy       
	  Normal  NodeAllocatableEnforced  6m               kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m (x8 over 6m)  kubelet          Node addons-704432 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m (x8 over 6m)  kubelet          Node addons-704432 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m (x7 over 6m)  kubelet          Node addons-704432 status is now: NodeHasSufficientPID
	  Normal  Starting                 5m53s            kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  5m53s            kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m53s            kubelet          Node addons-704432 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m53s            kubelet          Node addons-704432 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m53s            kubelet          Node addons-704432 status is now: NodeHasSufficientPID
	  Normal  NodeReady                5m52s            kubelet          Node addons-704432 status is now: NodeReady
	  Normal  RegisteredNode           5m49s            node-controller  Node addons-704432 event: Registered Node addons-704432 in Controller
	  Normal  CIDRAssignmentFailed     5m49s            cidrAllocator    Node addons-704432 status is now: CIDRAssignmentFailed
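	This block is the kubectl describe output for the single control-plane node; the Allocated resources section shows the node well under its CPU and memory limits at the time of the failure. To capture it again with the same context the test uses (a sketch):

	kubectl --context addons-704432 describe node addons-704432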
	
	
	==> dmesg <==
	[  +3.265965] kauditd_printk_skb: 354 callbacks suppressed
	[  +5.643253] kauditd_printk_skb: 5 callbacks suppressed
	[ +13.179926] kauditd_printk_skb: 32 callbacks suppressed
	[  +8.739086] kauditd_printk_skb: 20 callbacks suppressed
	[  +5.308191] kauditd_printk_skb: 41 callbacks suppressed
	[  +5.010861] kauditd_printk_skb: 2 callbacks suppressed
	[Dec 5 06:07] kauditd_printk_skb: 131 callbacks suppressed
	[  +3.056530] kauditd_printk_skb: 151 callbacks suppressed
	[Dec 5 06:08] kauditd_printk_skb: 90 callbacks suppressed
	[  +0.000036] kauditd_printk_skb: 29 callbacks suppressed
	[  +5.462621] kauditd_printk_skb: 53 callbacks suppressed
	[  +3.545398] kauditd_printk_skb: 47 callbacks suppressed
	[ +10.486251] kauditd_printk_skb: 17 callbacks suppressed
	[  +0.000088] kauditd_printk_skb: 22 callbacks suppressed
	[  +0.873636] kauditd_printk_skb: 95 callbacks suppressed
	[  +0.000035] kauditd_printk_skb: 68 callbacks suppressed
	[Dec 5 06:09] kauditd_printk_skb: 127 callbacks suppressed
	[  +5.464038] kauditd_printk_skb: 125 callbacks suppressed
	[  +6.306035] kauditd_printk_skb: 79 callbacks suppressed
	[  +0.000056] kauditd_printk_skb: 64 callbacks suppressed
	[  +8.387526] kauditd_printk_skb: 26 callbacks suppressed
	[ +17.759500] kauditd_printk_skb: 10 callbacks suppressed
	[Dec 5 06:10] kauditd_printk_skb: 22 callbacks suppressed
	[  +7.339511] kauditd_printk_skb: 32 callbacks suppressed
	[Dec 5 06:11] kauditd_printk_skb: 127 callbacks suppressed
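	The dmesg excerpt is dominated by suppressed kauditd messages, which is expected audit-subsystem noise rather than a failure signal. Assuming SSH access through the minikube binary, the tail of the kernel ring buffer can be pulled again with (a sketch):

	out/minikube-linux-amd64 -p addons-704432 ssh "dmesg | tail -n 100"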
	
	
	==> etcd [f60a9eb9b31747dc91ab1819fbd4117f44a2b1f108a50cf4accf641573d8b19f] <==
	{"level":"info","ts":"2025-12-05T06:06:52.396396Z","caller":"traceutil/trace.go:172","msg":"trace[2092413506] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1033; }","duration":"108.820866ms","start":"2025-12-05T06:06:52.287568Z","end":"2025-12-05T06:06:52.396389Z","steps":["trace[2092413506] 'agreement among raft nodes before linearized reading'  (duration: 108.733364ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-05T06:06:52.396498Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"185.229759ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-05T06:06:52.396527Z","caller":"traceutil/trace.go:172","msg":"trace[1165529738] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1033; }","duration":"185.259695ms","start":"2025-12-05T06:06:52.211263Z","end":"2025-12-05T06:06:52.396523Z","steps":["trace[1165529738] 'agreement among raft nodes before linearized reading'  (duration: 185.21983ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-05T06:06:52.395975Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"223.912497ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/servicecidrs\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-05T06:06:52.396691Z","caller":"traceutil/trace.go:172","msg":"trace[809306975] range","detail":"{range_begin:/registry/servicecidrs; range_end:; response_count:0; response_revision:1033; }","duration":"225.369092ms","start":"2025-12-05T06:06:52.171314Z","end":"2025-12-05T06:06:52.396683Z","steps":["trace[809306975] 'agreement among raft nodes before linearized reading'  (duration: 223.835204ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-05T06:07:27.421056Z","caller":"traceutil/trace.go:172","msg":"trace[1069459601] transaction","detail":"{read_only:false; response_revision:1071; number_of_response:1; }","duration":"102.489657ms","start":"2025-12-05T06:07:27.318546Z","end":"2025-12-05T06:07:27.421036Z","steps":["trace[1069459601] 'process raft request'  (duration: 102.399728ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-05T06:07:36.577509Z","caller":"traceutil/trace.go:172","msg":"trace[370428933] transaction","detail":"{read_only:false; response_revision:1149; number_of_response:1; }","duration":"169.168791ms","start":"2025-12-05T06:07:36.408320Z","end":"2025-12-05T06:07:36.577489Z","steps":["trace[370428933] 'process raft request'  (duration: 169.039342ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-05T06:08:29.403787Z","caller":"traceutil/trace.go:172","msg":"trace[1291986988] linearizableReadLoop","detail":"{readStateIndex:1354; appliedIndex:1354; }","duration":"116.341139ms","start":"2025-12-05T06:08:29.287420Z","end":"2025-12-05T06:08:29.403761Z","steps":["trace[1291986988] 'read index received'  (duration: 116.336298ms)","trace[1291986988] 'applied index is now lower than readState.Index'  (duration: 4.321µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-05T06:08:29.403998Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"116.547234ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-05T06:08:29.404025Z","caller":"traceutil/trace.go:172","msg":"trace[985953800] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1302; }","duration":"116.61601ms","start":"2025-12-05T06:08:29.287403Z","end":"2025-12-05T06:08:29.404019Z","steps":["trace[985953800] 'agreement among raft nodes before linearized reading'  (duration: 116.522396ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-05T06:08:29.404633Z","caller":"traceutil/trace.go:172","msg":"trace[1533409762] transaction","detail":"{read_only:false; response_revision:1303; number_of_response:1; }","duration":"148.437151ms","start":"2025-12-05T06:08:29.256186Z","end":"2025-12-05T06:08:29.404623Z","steps":["trace[1533409762] 'process raft request'  (duration: 147.646254ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-05T06:08:57.928429Z","caller":"traceutil/trace.go:172","msg":"trace[1549827448] transaction","detail":"{read_only:false; response_revision:1472; number_of_response:1; }","duration":"121.184659ms","start":"2025-12-05T06:08:57.807218Z","end":"2025-12-05T06:08:57.928403Z","steps":["trace[1549827448] 'process raft request'  (duration: 120.872003ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-05T06:09:03.086332Z","caller":"traceutil/trace.go:172","msg":"trace[155765959] transaction","detail":"{read_only:false; number_of_response:1; response_revision:1516; }","duration":"113.144856ms","start":"2025-12-05T06:09:02.973174Z","end":"2025-12-05T06:09:03.086319Z","steps":["trace[155765959] 'process raft request'  (duration: 73.048306ms)","trace[155765959] 'compare'  (duration: 39.373747ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-05T06:09:03.087342Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"108.443868ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/configmaps/local-path-storage/local-path-config\" limit:1 ","response":"range_response_count:1 size:1807"}
	{"level":"info","ts":"2025-12-05T06:09:03.087731Z","caller":"traceutil/trace.go:172","msg":"trace[919142343] range","detail":"{range_begin:/registry/configmaps/local-path-storage/local-path-config; range_end:; response_count:1; response_revision:1515; }","duration":"108.918271ms","start":"2025-12-05T06:09:02.978801Z","end":"2025-12-05T06:09:03.087719Z","steps":["trace[919142343] 'agreement among raft nodes before linearized reading'  (duration: 67.352256ms)","trace[919142343] 'range keys from in-memory index tree'  (duration: 39.479678ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-05T06:10:02.922089Z","caller":"traceutil/trace.go:172","msg":"trace[415724873] transaction","detail":"{read_only:false; response_revision:1828; number_of_response:1; }","duration":"105.462161ms","start":"2025-12-05T06:10:02.816603Z","end":"2025-12-05T06:10:02.922065Z","steps":["trace[415724873] 'process raft request'  (duration: 105.347538ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-05T06:10:19.342088Z","caller":"traceutil/trace.go:172","msg":"trace[803821834] linearizableReadLoop","detail":"{readStateIndex:1974; appliedIndex:1974; }","duration":"103.840232ms","start":"2025-12-05T06:10:19.238232Z","end":"2025-12-05T06:10:19.342072Z","steps":["trace[803821834] 'read index received'  (duration: 103.834918ms)","trace[803821834] 'applied index is now lower than readState.Index'  (duration: 4.349µs)"],"step_count":2}
	{"level":"info","ts":"2025-12-05T06:10:19.342634Z","caller":"traceutil/trace.go:172","msg":"trace[514152118] transaction","detail":"{read_only:false; number_of_response:1; response_revision:1889; }","duration":"112.413483ms","start":"2025-12-05T06:10:19.230211Z","end":"2025-12-05T06:10:19.342624Z","steps":["trace[514152118] 'process raft request'  (duration: 112.334481ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-05T06:10:19.345336Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"107.224601ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/apiextensions.k8s.io/customresourcedefinitions/volumesnapshots.snapshot.storage.k8s.io\" limit:1 ","response":"range_response_count:1 size:36204"}
	{"level":"info","ts":"2025-12-05T06:10:19.345406Z","caller":"traceutil/trace.go:172","msg":"trace[591676835] range","detail":"{range_begin:/registry/apiextensions.k8s.io/customresourcedefinitions/volumesnapshots.snapshot.storage.k8s.io; range_end:; response_count:1; response_revision:1888; }","duration":"107.308987ms","start":"2025-12-05T06:10:19.238090Z","end":"2025-12-05T06:10:19.345399Z","steps":["trace[591676835] 'agreement among raft nodes before linearized reading'  (duration: 104.107925ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-05T06:10:19.346506Z","caller":"traceutil/trace.go:172","msg":"trace[424671013] transaction","detail":"{read_only:false; response_revision:1890; number_of_response:1; }","duration":"100.227409ms","start":"2025-12-05T06:10:19.246269Z","end":"2025-12-05T06:10:19.346497Z","steps":["trace[424671013] 'process raft request'  (duration: 100.11876ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-05T06:10:24.572636Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"146.009306ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/192.168.39.31\" limit:1 ","response":"range_response_count:1 size:133"}
	{"level":"info","ts":"2025-12-05T06:10:24.572716Z","caller":"traceutil/trace.go:172","msg":"trace[87755420] range","detail":"{range_begin:/registry/masterleases/192.168.39.31; range_end:; response_count:1; response_revision:1990; }","duration":"146.098424ms","start":"2025-12-05T06:10:24.426605Z","end":"2025-12-05T06:10:24.572703Z","steps":["trace[87755420] 'range keys from in-memory index tree'  (duration: 145.828469ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-05T06:10:24.573097Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"174.720037ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/external-snapshotter-runner\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-05T06:10:24.573199Z","caller":"traceutil/trace.go:172","msg":"trace[1229524803] range","detail":"{range_begin:/registry/clusterroles/external-snapshotter-runner; range_end:; response_count:0; response_revision:1990; }","duration":"174.829719ms","start":"2025-12-05T06:10:24.398356Z","end":"2025-12-05T06:10:24.573186Z","steps":["trace[1229524803] 'range keys from in-memory index tree'  (duration: 174.649316ms)"],"step_count":1}
	
	
	==> kernel <==
	 06:11:38 up 6 min,  0 users,  load average: 0.43, 0.93, 0.53
	Linux addons-704432 6.6.95 #1 SMP PREEMPT_DYNAMIC Wed Nov 19 01:10:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [927678d5be4f7c31a9f6e1b1bdf5361f7b5094418a3db936684a28065e55a95b] <==
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1205 06:06:45.835804       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.97.120.190:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.97.120.190:443/apis/metrics.k8s.io/v1beta1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" logger="UnhandledError"
	E1205 06:08:39.039266       1 conn.go:339] Error on socket receive: read tcp 192.168.39.31:8443->192.168.39.1:58308: use of closed network connection
	E1205 06:08:39.237341       1 conn.go:339] Error on socket receive: read tcp 192.168.39.31:8443->192.168.39.1:58346: use of closed network connection
	I1205 06:09:04.937364       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1205 06:09:11.691451       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1205 06:09:11.885404       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.110.143.66"}
	E1205 06:09:18.296390       1 authentication.go:75] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I1205 06:09:20.503030       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.98.127.183"}
	I1205 06:10:19.195813       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1205 06:10:19.196084       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1205 06:10:19.228911       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1205 06:10:19.228970       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1205 06:10:19.345065       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1205 06:10:19.345633       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1205 06:10:19.361427       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1205 06:10:19.361588       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1205 06:10:19.392838       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1205 06:10:19.392922       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1205 06:10:20.355082       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1205 06:10:20.395367       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1205 06:10:20.407073       1 cacher.go:182] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I1205 06:10:46.859704       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I1205 06:11:36.782345       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.109.147.59"}
	
	
	==> kube-controller-manager [4ca2ad1c51c44a217f4b9f3dfac218e9a0c651e2100ae011921ab3564d640e45] <==
	E1205 06:10:23.906847       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1205 06:10:24.106493       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1205 06:10:24.107634       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1205 06:10:27.525828       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1205 06:10:27.526968       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1205 06:10:29.523051       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1205 06:10:29.524402       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1205 06:10:29.590558       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1205 06:10:29.591946       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1205 06:10:38.241760       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1205 06:10:38.243070       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1205 06:10:39.170481       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1205 06:10:39.171460       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1205 06:10:42.024196       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1205 06:10:42.025360       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1205 06:10:56.574330       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1205 06:10:56.575694       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1205 06:10:56.915606       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1205 06:10:56.916694       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1205 06:10:59.454347       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1205 06:10:59.455401       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1205 06:11:27.421413       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1205 06:11:27.422526       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1205 06:11:32.699098       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1205 06:11:32.700397       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	
	
	==> kube-proxy [7fd72a7b1b3555112cfc8e7beadee61403ee61cb87d468f04deafb1e13ff14df] <==
	I1205 06:05:51.486139       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1205 06:05:51.590400       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1205 06:05:51.590433       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.31"]
	E1205 06:05:51.590511       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1205 06:05:51.678346       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1205 06:05:51.678638       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1205 06:05:51.678785       1 server_linux.go:132] "Using iptables Proxier"
	I1205 06:05:51.698093       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1205 06:05:51.698833       1 server.go:527] "Version info" version="v1.34.2"
	I1205 06:05:51.698846       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1205 06:05:51.712825       1 config.go:200] "Starting service config controller"
	I1205 06:05:51.714007       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1205 06:05:51.714090       1 config.go:106] "Starting endpoint slice config controller"
	I1205 06:05:51.714094       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1205 06:05:51.714180       1 config.go:403] "Starting serviceCIDR config controller"
	I1205 06:05:51.714185       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1205 06:05:51.717136       1 config.go:309] "Starting node config controller"
	I1205 06:05:51.717222       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1205 06:05:51.814967       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1205 06:05:51.815013       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1205 06:05:51.815046       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1205 06:05:51.818141       1 shared_informer.go:356] "Caches are synced" controller="node config"
	
	
	==> kube-scheduler [5717db985d7441715010cb014e949e362371d7ce8a89fd89fb0139a23fa852ac] <==
	E1205 06:05:42.708306       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1205 06:05:42.708406       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1205 06:05:42.709088       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1205 06:05:42.710004       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1205 06:05:42.711024       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1205 06:05:42.711084       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1205 06:05:42.711152       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1205 06:05:42.711221       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1205 06:05:42.711332       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1205 06:05:42.711371       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1205 06:05:42.711933       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1205 06:05:43.554717       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1205 06:05:43.578120       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1205 06:05:43.581276       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1205 06:05:43.586332       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1205 06:05:43.608352       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1205 06:05:43.679450       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1205 06:05:43.770911       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1205 06:05:43.776398       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1205 06:05:43.780650       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1205 06:05:43.836228       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1205 06:05:43.932286       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1205 06:05:43.937417       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1205 06:05:43.992743       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	I1205 06:05:46.298116       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 05 06:10:22 addons-704432 kubelet[1496]: I1205 06:10:22.491326    1496 scope.go:117] "RemoveContainer" containerID="008bf88767b6e543a511b5817ecb8430126569979a0a3e3d3367749f4a7f4ed1"
	Dec 05 06:10:22 addons-704432 kubelet[1496]: I1205 06:10:22.492001    1496 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"008bf88767b6e543a511b5817ecb8430126569979a0a3e3d3367749f4a7f4ed1"} err="failed to get container status \"008bf88767b6e543a511b5817ecb8430126569979a0a3e3d3367749f4a7f4ed1\": rpc error: code = NotFound desc = could not find container \"008bf88767b6e543a511b5817ecb8430126569979a0a3e3d3367749f4a7f4ed1\": container with ID starting with 008bf88767b6e543a511b5817ecb8430126569979a0a3e3d3367749f4a7f4ed1 not found: ID does not exist"
	Dec 05 06:10:23 addons-704432 kubelet[1496]: I1205 06:10:23.404484    1496 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-h9dbz" secret="" err="secret \"gcp-auth\" not found"
	Dec 05 06:10:23 addons-704432 kubelet[1496]: I1205 06:10:23.412573    1496 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="54af9cfe-0ae2-4b54-bf4e-d358f65ed2a8" path="/var/lib/kubelet/pods/54af9cfe-0ae2-4b54-bf4e-d358f65ed2a8/volumes"
	Dec 05 06:10:23 addons-704432 kubelet[1496]: I1205 06:10:23.413152    1496 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="748c450f-fe06-4b00-9886-09a711ab13de" path="/var/lib/kubelet/pods/748c450f-fe06-4b00-9886-09a711ab13de/volumes"
	Dec 05 06:10:23 addons-704432 kubelet[1496]: I1205 06:10:23.413800    1496 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a266d557-3ae6-44e3-9918-02454d484d3d" path="/var/lib/kubelet/pods/a266d557-3ae6-44e3-9918-02454d484d3d/volumes"
	Dec 05 06:10:25 addons-704432 kubelet[1496]: E1205 06:10:25.657845    1496 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1764915025657336259 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585496} inodes_used:{value:192}}"
	Dec 05 06:10:25 addons-704432 kubelet[1496]: E1205 06:10:25.657914    1496 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1764915025657336259 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585496} inodes_used:{value:192}}"
	Dec 05 06:10:35 addons-704432 kubelet[1496]: E1205 06:10:35.660600    1496 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1764915035660273761 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585496} inodes_used:{value:192}}"
	Dec 05 06:10:35 addons-704432 kubelet[1496]: E1205 06:10:35.660644    1496 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1764915035660273761 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585496} inodes_used:{value:192}}"
	Dec 05 06:10:45 addons-704432 kubelet[1496]: E1205 06:10:45.663462    1496 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1764915045662814974 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585496} inodes_used:{value:192}}"
	Dec 05 06:10:45 addons-704432 kubelet[1496]: E1205 06:10:45.663488    1496 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1764915045662814974 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585496} inodes_used:{value:192}}"
	Dec 05 06:10:55 addons-704432 kubelet[1496]: E1205 06:10:55.666469    1496 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1764915055666025334 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585496} inodes_used:{value:192}}"
	Dec 05 06:10:55 addons-704432 kubelet[1496]: E1205 06:10:55.666500    1496 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1764915055666025334 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585496} inodes_used:{value:192}}"
	Dec 05 06:11:05 addons-704432 kubelet[1496]: E1205 06:11:05.669973    1496 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1764915065669525988 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585496} inodes_used:{value:192}}"
	Dec 05 06:11:05 addons-704432 kubelet[1496]: E1205 06:11:05.670030    1496 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1764915065669525988 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585496} inodes_used:{value:192}}"
	Dec 05 06:11:15 addons-704432 kubelet[1496]: E1205 06:11:15.672426    1496 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1764915075671775794 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585496} inodes_used:{value:192}}"
	Dec 05 06:11:15 addons-704432 kubelet[1496]: E1205 06:11:15.672456    1496 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1764915075671775794 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585496} inodes_used:{value:192}}"
	Dec 05 06:11:19 addons-704432 kubelet[1496]: I1205 06:11:19.405124    1496 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Dec 05 06:11:25 addons-704432 kubelet[1496]: E1205 06:11:25.675281    1496 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1764915085674775891 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585496} inodes_used:{value:192}}"
	Dec 05 06:11:25 addons-704432 kubelet[1496]: E1205 06:11:25.675304    1496 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1764915085674775891 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585496} inodes_used:{value:192}}"
	Dec 05 06:11:33 addons-704432 kubelet[1496]: I1205 06:11:33.404774    1496 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-h9dbz" secret="" err="secret \"gcp-auth\" not found"
	Dec 05 06:11:35 addons-704432 kubelet[1496]: E1205 06:11:35.678082    1496 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1764915095677317956 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585496} inodes_used:{value:192}}"
	Dec 05 06:11:35 addons-704432 kubelet[1496]: E1205 06:11:35.678106    1496 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1764915095677317956 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585496} inodes_used:{value:192}}"
	Dec 05 06:11:36 addons-704432 kubelet[1496]: I1205 06:11:36.781355    1496 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bb2sc\" (UniqueName: \"kubernetes.io/projected/70686f5a-3f92-42b7-9413-1eeaadcdd733-kube-api-access-bb2sc\") pod \"hello-world-app-5d498dc89-mn5w9\" (UID: \"70686f5a-3f92-42b7-9413-1eeaadcdd733\") " pod="default/hello-world-app-5d498dc89-mn5w9"
	
	
	==> storage-provisioner [c4aa028386737b5adb68bca822bd990a0fbd7dfe510d1dd4cc5ad272e8c3993e] <==
	W1205 06:11:12.507637       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:11:14.512052       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:11:14.516963       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:11:16.520524       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:11:16.525543       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:11:18.529143       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:11:18.534258       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:11:20.538936       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:11:20.543395       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:11:22.548188       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:11:22.554143       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:11:24.558271       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:11:24.562853       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:11:26.566739       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:11:26.574976       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:11:28.577942       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:11:28.583409       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:11:30.587093       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:11:30.595428       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:11:32.598741       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:11:32.604921       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:11:34.608216       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:11:34.613679       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:11:36.617230       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:11:36.622747       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-704432 -n addons-704432
helpers_test.go:269: (dbg) Run:  kubectl --context addons-704432 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: hello-world-app-5d498dc89-mn5w9 ingress-nginx-admission-create-x7z9b ingress-nginx-admission-patch-smjbr
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-704432 describe pod hello-world-app-5d498dc89-mn5w9 ingress-nginx-admission-create-x7z9b ingress-nginx-admission-patch-smjbr
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-704432 describe pod hello-world-app-5d498dc89-mn5w9 ingress-nginx-admission-create-x7z9b ingress-nginx-admission-patch-smjbr: exit status 1 (82.632451ms)

                                                
                                                
-- stdout --
	Name:             hello-world-app-5d498dc89-mn5w9
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-704432/192.168.39.31
	Start Time:       Fri, 05 Dec 2025 06:11:36 +0000
	Labels:           app=hello-world-app
	                  pod-template-hash=5d498dc89
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-world-app-5d498dc89
	Containers:
	  hello-world-app:
	    Container ID:   
	    Image:          docker.io/kicbase/echo-server:1.0
	    Image ID:       
	    Port:           8080/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-bb2sc (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-bb2sc:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  2s    default-scheduler  Successfully assigned default/hello-world-app-5d498dc89-mn5w9 to addons-704432
	  Normal  Pulling    1s    kubelet            Pulling image "docker.io/kicbase/echo-server:1.0"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-x7z9b" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-smjbr" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-704432 describe pod hello-world-app-5d498dc89-mn5w9 ingress-nginx-admission-create-x7z9b ingress-nginx-admission-patch-smjbr: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-704432 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-704432 addons disable ingress-dns --alsologtostderr -v=1: (1.065798525s)
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-704432 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-704432 addons disable ingress --alsologtostderr -v=1: (7.751338259s)
--- FAIL: TestAddons/parallel/Ingress (156.34s)
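
A minimal manual triage sketch for this failure, assuming the addons-704432 profile is still running (the context name and the nginx/hello-world-app objects are taken from the output above; the ingress-nginx namespace and the app.kubernetes.io/component=controller label are assumptions based on the standard minikube ingress addon layout, not something recorded in this run):

	# Check the Ingress/Service the test created and the pods backing them.
	kubectl --context addons-704432 get ingress,svc -n default -o wide
	kubectl --context addons-704432 get pods -A -o wide | grep -E 'nginx|hello-world-app'
	# Tail the controller logs to see whether traffic for the nginx Ingress ever reached it
	# (namespace and label selector are assumed, see above).
	kubectl --context addons-704432 -n ingress-nginx logs -l app.kubernetes.io/component=controller --tail=100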

                                                
                                    
x
+
TestCertExpiration (1074.5s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-809455 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-809455 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (55.148950379s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-809455 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p cert-expiration-809455 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: exit status 80 (13m57.32175672s)

                                                
                                                
-- stdout --
	* [cert-expiration-809455] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21997
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21997-12744/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-12744/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "cert-expiration-809455" primary control-plane node in "cert-expiration-809455" cluster
	* Preparing Kubernetes v1.34.2 on CRI-O 1.29.1 ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Certificate client.crt has expired. Generating a new one...
	! Certificate apiserver.crt.d8c97412 has expired. Generating a new one...
	! Certificate proxy-client.crt has expired. Generating a new one...
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.2
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001786275s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.61.103:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is healthy after 1.506740299s
	[control-plane-check] kube-apiserver is not healthy after 4m0.001076432s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000803206s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.61.103:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.2
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.002213396s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.61.103:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is healthy after 1.506451689s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000800234s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.00080655s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.61.103:8443/livez: Get "https://192.168.61.103:8443/livez?timeout=10s": dial tcp 192.168.61.103:8443: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.2
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.002213396s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.61.103:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is healthy after 1.506451689s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000800234s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.00080655s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.61.103:8443/livez: Get "https://192.168.61.103:8443/livez?timeout=10s": dial tcp 192.168.61.103:8443: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	* 

                                                
                                                
** /stderr **
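
The kubeadm output above already spells out the triage path for a control-plane component that never became healthy; a minimal sketch of that procedure on the cert-expiration-809455 VM follows (the crictl commands are the ones quoted in the output; the minikube ssh profile flag and the apiserver.crt filename under /var/lib/minikube/certs are assumptions about the usual minikube layout):

	# Open a shell on the node (assumed profile flag usage).
	minikube ssh -p cert-expiration-809455
	# List the kube-* containers, as suggested by the kubeadm output above.
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	# Inspect the logs of the failing container (CONTAINERID taken from the previous command).
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID
	# Confirm the regenerated apiserver certificate is no longer expired
	# (certificate directory from the kubeadm output; the filename is an assumption).
	sudo openssl x509 -noout -enddate -in /var/lib/minikube/certs/apiserver.crt
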
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-linux-amd64 start -p cert-expiration-809455 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio" : exit status 80
cert_options_test.go:138: *** TestCertExpiration FAILED at 2025-12-05 07:23:01.355468524 +0000 UTC m=+4687.515167313
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestCertExpiration]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p cert-expiration-809455 -n cert-expiration-809455
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p cert-expiration-809455 -n cert-expiration-809455: exit status 2 (194.588647ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestCertExpiration FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestCertExpiration]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p cert-expiration-809455 logs -n 25
helpers_test.go:260: TestCertExpiration logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────┬───────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                      ARGS                                      │    PROFILE    │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────┼───────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p bridge-550303 sudo iptables -t nat -L -n -v                                 │ bridge-550303 │ jenkins │ v1.37.0 │ 05 Dec 25 07:19 UTC │ 05 Dec 25 07:19 UTC │
	│ ssh     │ -p bridge-550303 sudo systemctl status kubelet --all --full --no-pager         │ bridge-550303 │ jenkins │ v1.37.0 │ 05 Dec 25 07:19 UTC │ 05 Dec 25 07:19 UTC │
	│ ssh     │ -p bridge-550303 sudo systemctl cat kubelet --no-pager                         │ bridge-550303 │ jenkins │ v1.37.0 │ 05 Dec 25 07:19 UTC │ 05 Dec 25 07:19 UTC │
	│ ssh     │ -p bridge-550303 sudo journalctl -xeu kubelet --all --full --no-pager          │ bridge-550303 │ jenkins │ v1.37.0 │ 05 Dec 25 07:19 UTC │ 05 Dec 25 07:19 UTC │
	│ ssh     │ -p bridge-550303 sudo cat /etc/kubernetes/kubelet.conf                         │ bridge-550303 │ jenkins │ v1.37.0 │ 05 Dec 25 07:19 UTC │ 05 Dec 25 07:19 UTC │
	│ ssh     │ -p bridge-550303 sudo cat /var/lib/kubelet/config.yaml                         │ bridge-550303 │ jenkins │ v1.37.0 │ 05 Dec 25 07:19 UTC │ 05 Dec 25 07:19 UTC │
	│ ssh     │ -p bridge-550303 sudo systemctl status docker --all --full --no-pager          │ bridge-550303 │ jenkins │ v1.37.0 │ 05 Dec 25 07:19 UTC │                     │
	│ ssh     │ -p bridge-550303 sudo systemctl cat docker --no-pager                          │ bridge-550303 │ jenkins │ v1.37.0 │ 05 Dec 25 07:19 UTC │ 05 Dec 25 07:19 UTC │
	│ ssh     │ -p bridge-550303 sudo cat /etc/docker/daemon.json                              │ bridge-550303 │ jenkins │ v1.37.0 │ 05 Dec 25 07:19 UTC │ 05 Dec 25 07:19 UTC │
	│ ssh     │ -p bridge-550303 sudo docker system info                                       │ bridge-550303 │ jenkins │ v1.37.0 │ 05 Dec 25 07:19 UTC │                     │
	│ ssh     │ -p bridge-550303 sudo systemctl status cri-docker --all --full --no-pager      │ bridge-550303 │ jenkins │ v1.37.0 │ 05 Dec 25 07:19 UTC │                     │
	│ ssh     │ -p bridge-550303 sudo systemctl cat cri-docker --no-pager                      │ bridge-550303 │ jenkins │ v1.37.0 │ 05 Dec 25 07:19 UTC │ 05 Dec 25 07:19 UTC │
	│ ssh     │ -p bridge-550303 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf │ bridge-550303 │ jenkins │ v1.37.0 │ 05 Dec 25 07:19 UTC │                     │
	│ ssh     │ -p bridge-550303 sudo cat /usr/lib/systemd/system/cri-docker.service           │ bridge-550303 │ jenkins │ v1.37.0 │ 05 Dec 25 07:19 UTC │ 05 Dec 25 07:19 UTC │
	│ ssh     │ -p bridge-550303 sudo cri-dockerd --version                                    │ bridge-550303 │ jenkins │ v1.37.0 │ 05 Dec 25 07:19 UTC │ 05 Dec 25 07:19 UTC │
	│ ssh     │ -p bridge-550303 sudo systemctl status containerd --all --full --no-pager      │ bridge-550303 │ jenkins │ v1.37.0 │ 05 Dec 25 07:19 UTC │                     │
	│ ssh     │ -p bridge-550303 sudo systemctl cat containerd --no-pager                      │ bridge-550303 │ jenkins │ v1.37.0 │ 05 Dec 25 07:19 UTC │ 05 Dec 25 07:19 UTC │
	│ ssh     │ -p bridge-550303 sudo cat /lib/systemd/system/containerd.service               │ bridge-550303 │ jenkins │ v1.37.0 │ 05 Dec 25 07:19 UTC │ 05 Dec 25 07:19 UTC │
	│ ssh     │ -p bridge-550303 sudo cat /etc/containerd/config.toml                          │ bridge-550303 │ jenkins │ v1.37.0 │ 05 Dec 25 07:19 UTC │ 05 Dec 25 07:19 UTC │
	│ ssh     │ -p bridge-550303 sudo containerd config dump                                   │ bridge-550303 │ jenkins │ v1.37.0 │ 05 Dec 25 07:19 UTC │ 05 Dec 25 07:19 UTC │
	│ ssh     │ -p bridge-550303 sudo systemctl status crio --all --full --no-pager            │ bridge-550303 │ jenkins │ v1.37.0 │ 05 Dec 25 07:19 UTC │ 05 Dec 25 07:19 UTC │
	│ ssh     │ -p bridge-550303 sudo systemctl cat crio --no-pager                            │ bridge-550303 │ jenkins │ v1.37.0 │ 05 Dec 25 07:19 UTC │ 05 Dec 25 07:19 UTC │
	│ ssh     │ -p bridge-550303 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;  │ bridge-550303 │ jenkins │ v1.37.0 │ 05 Dec 25 07:19 UTC │ 05 Dec 25 07:19 UTC │
	│ ssh     │ -p bridge-550303 sudo crio config                                              │ bridge-550303 │ jenkins │ v1.37.0 │ 05 Dec 25 07:19 UTC │ 05 Dec 25 07:19 UTC │
	│ delete  │ -p bridge-550303                                                               │ bridge-550303 │ jenkins │ v1.37.0 │ 05 Dec 25 07:19 UTC │ 05 Dec 25 07:19 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────┴───────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/05 07:18:13
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 07:18:13.530604   60030 out.go:360] Setting OutFile to fd 1 ...
	I1205 07:18:13.530750   60030 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 07:18:13.530761   60030 out.go:374] Setting ErrFile to fd 2...
	I1205 07:18:13.530768   60030 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 07:18:13.531000   60030 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-12744/.minikube/bin
	I1205 07:18:13.531510   60030 out.go:368] Setting JSON to false
	I1205 07:18:13.532526   60030 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":7238,"bootTime":1764911855,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1205 07:18:13.532584   60030 start.go:143] virtualization: kvm guest
	I1205 07:18:13.534937   60030 out.go:179] * [bridge-550303] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1205 07:18:13.536503   60030 out.go:179]   - MINIKUBE_LOCATION=21997
	I1205 07:18:13.536573   60030 notify.go:221] Checking for updates...
	I1205 07:18:13.539885   60030 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 07:18:13.541512   60030 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-12744/kubeconfig
	I1205 07:18:13.542919   60030 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-12744/.minikube
	I1205 07:18:13.544264   60030 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1205 07:18:13.545634   60030 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 07:18:13.547808   60030 config.go:182] Loaded profile config "cert-expiration-809455": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1205 07:18:13.547937   60030 config.go:182] Loaded profile config "enable-default-cni-550303": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1205 07:18:13.548070   60030 config.go:182] Loaded profile config "flannel-550303": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1205 07:18:13.548173   60030 config.go:182] Loaded profile config "guest-902352": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1205 07:18:13.548306   60030 driver.go:422] Setting default libvirt URI to qemu:///system
	I1205 07:18:13.588817   60030 out.go:179] * Using the kvm2 driver based on user configuration
	I1205 07:18:13.590373   60030 start.go:309] selected driver: kvm2
	I1205 07:18:13.590390   60030 start.go:927] validating driver "kvm2" against <nil>
	I1205 07:18:13.590401   60030 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 07:18:13.591218   60030 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1205 07:18:13.591476   60030 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 07:18:13.591508   60030 cni.go:84] Creating CNI manager for "bridge"
	I1205 07:18:13.591516   60030 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1205 07:18:13.591574   60030 start.go:353] cluster config:
	{Name:bridge-550303 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:bridge-550303 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0
GPUs: AutoPauseInterval:1m0s}
	I1205 07:18:13.591709   60030 iso.go:125] acquiring lock: {Name:mk8940d2199650f8674488213bff178b8d82a626 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:18:13.593454   60030 out.go:179] * Starting "bridge-550303" primary control-plane node in "bridge-550303" cluster
	I1205 07:18:13.594717   60030 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1205 07:18:13.594763   60030 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21997-12744/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1205 07:18:13.594777   60030 cache.go:65] Caching tarball of preloaded images
	I1205 07:18:13.594889   60030 preload.go:238] Found /home/jenkins/minikube-integration/21997-12744/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1205 07:18:13.594900   60030 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1205 07:18:13.594980   60030 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/bridge-550303/config.json ...
	I1205 07:18:13.594998   60030 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/bridge-550303/config.json: {Name:mk23d0ecac455aa94617a6faf23d19b9bc406da1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:18:13.595133   60030 start.go:360] acquireMachinesLock for bridge-550303: {Name:mk6f885ffa3cca5ad53a733e47a4c8f74f8579b1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 07:18:13.595162   60030 start.go:364] duration metric: took 16.197µs to acquireMachinesLock for "bridge-550303"
	I1205 07:18:13.595179   60030 start.go:93] Provisioning new machine with config: &{Name:bridge-550303 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.34.2 ClusterName:bridge-550303 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Binar
yMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 07:18:13.595224   60030 start.go:125] createHost starting for "" (driver="kvm2")
	W1205 07:18:11.041403   57508 pod_ready.go:104] pod "coredns-66bc5c9577-hxn9h" is not "Ready", error: <nil>
	W1205 07:18:13.540928   57508 pod_ready.go:104] pod "coredns-66bc5c9577-hxn9h" is not "Ready", error: <nil>
	I1205 07:18:10.173064   59006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12744/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1205 07:18:10.213177   59006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12744/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 07:18:10.249145   59006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12744/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1205 07:18:10.286304   59006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/flannel-550303/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1205 07:18:10.320113   59006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/flannel-550303/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1205 07:18:10.352695   59006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/flannel-550303/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 07:18:10.388111   59006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/flannel-550303/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1205 07:18:10.428431   59006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12744/.minikube/certs/16702.pem --> /usr/share/ca-certificates/16702.pem (1338 bytes)
	I1205 07:18:10.464413   59006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12744/.minikube/files/etc/ssl/certs/167022.pem --> /usr/share/ca-certificates/167022.pem (1708 bytes)
	I1205 07:18:10.497898   59006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12744/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 07:18:10.529355   59006 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 07:18:10.555404   59006 ssh_runner.go:195] Run: openssl version
	I1205 07:18:10.563341   59006 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/16702.pem
	I1205 07:18:10.576249   59006 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/16702.pem /etc/ssl/certs/16702.pem
	I1205 07:18:10.588603   59006 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16702.pem
	I1205 07:18:10.594412   59006 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 06:17 /usr/share/ca-certificates/16702.pem
	I1205 07:18:10.594478   59006 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16702.pem
	I1205 07:18:10.602908   59006 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1205 07:18:10.621720   59006 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/16702.pem /etc/ssl/certs/51391683.0
	I1205 07:18:10.640794   59006 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/167022.pem
	I1205 07:18:10.653827   59006 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/167022.pem /etc/ssl/certs/167022.pem
	I1205 07:18:10.675418   59006 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/167022.pem
	I1205 07:18:10.684072   59006 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 06:17 /usr/share/ca-certificates/167022.pem
	I1205 07:18:10.684147   59006 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/167022.pem
	I1205 07:18:10.693077   59006 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1205 07:18:10.706264   59006 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/167022.pem /etc/ssl/certs/3ec20f2e.0
	I1205 07:18:10.719371   59006 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:18:10.734009   59006 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1205 07:18:10.746032   59006 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:18:10.752751   59006 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 06:05 /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:18:10.752819   59006 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:18:10.760828   59006 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1205 07:18:10.774240   59006 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1205 07:18:10.789125   59006 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 07:18:10.794554   59006 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1205 07:18:10.794626   59006 kubeadm.go:401] StartCluster: {Name:flannel-550303 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2
ClusterName:flannel-550303 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP:192.168.83.157 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirr
or: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 07:18:10.794725   59006 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 07:18:10.794783   59006 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 07:18:10.838250   59006 cri.go:89] found id: ""
	I1205 07:18:10.838325   59006 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 07:18:10.853769   59006 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 07:18:10.868617   59006 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 07:18:10.881144   59006 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 07:18:10.881168   59006 kubeadm.go:158] found existing configuration files:
	
	I1205 07:18:10.881233   59006 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 07:18:10.892988   59006 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 07:18:10.893098   59006 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 07:18:10.905515   59006 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 07:18:10.917952   59006 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 07:18:10.918017   59006 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 07:18:10.936222   59006 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 07:18:10.949608   59006 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 07:18:10.949669   59006 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 07:18:10.964312   59006 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 07:18:10.976301   59006 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 07:18:10.976376   59006 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1205 07:18:10.991334   59006 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1205 07:18:11.169404   59006 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1205 07:18:13.597071   60030 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1205 07:18:13.597276   60030 start.go:159] libmachine.API.Create for "bridge-550303" (driver="kvm2")
	I1205 07:18:13.597314   60030 client.go:173] LocalClient.Create starting
	I1205 07:18:13.597398   60030 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21997-12744/.minikube/certs/ca.pem
	I1205 07:18:13.597458   60030 main.go:143] libmachine: Decoding PEM data...
	I1205 07:18:13.597484   60030 main.go:143] libmachine: Parsing certificate...
	I1205 07:18:13.597548   60030 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21997-12744/.minikube/certs/cert.pem
	I1205 07:18:13.597575   60030 main.go:143] libmachine: Decoding PEM data...
	I1205 07:18:13.597606   60030 main.go:143] libmachine: Parsing certificate...
	I1205 07:18:13.597947   60030 main.go:143] libmachine: creating domain...
	I1205 07:18:13.597959   60030 main.go:143] libmachine: creating network...
	I1205 07:18:13.599472   60030 main.go:143] libmachine: found existing default network
	I1205 07:18:13.599741   60030 main.go:143] libmachine: <network connections='4'>
	  <name>default</name>
	  <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	  <forward mode='nat'>
	    <nat>
	      <port start='1024' end='65535'/>
	    </nat>
	  </forward>
	  <bridge name='virbr0' stp='on' delay='0'/>
	  <mac address='52:54:00:10:a2:1d'/>
	  <ip address='192.168.122.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.122.2' end='192.168.122.254'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1205 07:18:13.600537   60030 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:08:9d:59} reservation:<nil>}
	I1205 07:18:13.601224   60030 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:fe:29:e1} reservation:<nil>}
	I1205 07:18:13.601827   60030 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:9f:24:7c} reservation:<nil>}
	I1205 07:18:13.602617   60030 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001dd8ed0}
	I1205 07:18:13.602713   60030 main.go:143] libmachine: defining private network:
	
	<network>
	  <name>mk-bridge-550303</name>
	  <dns enable='no'/>
	  <ip address='192.168.72.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.72.2' end='192.168.72.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1205 07:18:13.609511   60030 main.go:143] libmachine: creating private network mk-bridge-550303 192.168.72.0/24...
	I1205 07:18:13.694448   60030 main.go:143] libmachine: private network mk-bridge-550303 192.168.72.0/24 created
	I1205 07:18:13.694873   60030 main.go:143] libmachine: <network>
	  <name>mk-bridge-550303</name>
	  <uuid>bf909d97-2f40-4ccb-a7f0-d25823293c35</uuid>
	  <bridge name='virbr4' stp='on' delay='0'/>
	  <mac address='52:54:00:b9:5a:19'/>
	  <dns enable='no'/>
	  <ip address='192.168.72.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.72.2' end='192.168.72.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1205 07:18:13.694915   60030 main.go:143] libmachine: setting up store path in /home/jenkins/minikube-integration/21997-12744/.minikube/machines/bridge-550303 ...
	I1205 07:18:13.694946   60030 main.go:143] libmachine: building disk image from file:///home/jenkins/minikube-integration/21997-12744/.minikube/cache/iso/amd64/minikube-v1.37.0-1763503576-21924-amd64.iso
	I1205 07:18:13.694958   60030 common.go:152] Making disk image using store path: /home/jenkins/minikube-integration/21997-12744/.minikube
	I1205 07:18:13.695034   60030 main.go:143] libmachine: Downloading /home/jenkins/minikube-integration/21997-12744/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21997-12744/.minikube/cache/iso/amd64/minikube-v1.37.0-1763503576-21924-amd64.iso...
	I1205 07:18:13.933892   60030 common.go:159] Creating ssh key: /home/jenkins/minikube-integration/21997-12744/.minikube/machines/bridge-550303/id_rsa...
	I1205 07:18:13.998994   60030 common.go:165] Creating raw disk image: /home/jenkins/minikube-integration/21997-12744/.minikube/machines/bridge-550303/bridge-550303.rawdisk...
	I1205 07:18:13.999040   60030 main.go:143] libmachine: Writing magic tar header
	I1205 07:18:13.999076   60030 main.go:143] libmachine: Writing SSH key tar header
	I1205 07:18:13.999185   60030 common.go:179] Fixing permissions on /home/jenkins/minikube-integration/21997-12744/.minikube/machines/bridge-550303 ...
	I1205 07:18:13.999279   60030 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21997-12744/.minikube/machines/bridge-550303
	I1205 07:18:13.999309   60030 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21997-12744/.minikube/machines/bridge-550303 (perms=drwx------)
	I1205 07:18:13.999328   60030 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21997-12744/.minikube/machines
	I1205 07:18:13.999348   60030 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21997-12744/.minikube/machines (perms=drwxr-xr-x)
	I1205 07:18:13.999369   60030 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21997-12744/.minikube
	I1205 07:18:13.999388   60030 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21997-12744/.minikube (perms=drwxr-xr-x)
	I1205 07:18:13.999404   60030 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21997-12744
	I1205 07:18:13.999417   60030 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21997-12744 (perms=drwxrwxr-x)
	I1205 07:18:13.999430   60030 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration
	I1205 07:18:13.999458   60030 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1205 07:18:13.999477   60030 main.go:143] libmachine: checking permissions on dir: /home/jenkins
	I1205 07:18:13.999514   60030 main.go:143] libmachine: setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1205 07:18:13.999531   60030 main.go:143] libmachine: checking permissions on dir: /home
	I1205 07:18:13.999543   60030 main.go:143] libmachine: skipping /home - not owner
	I1205 07:18:13.999553   60030 main.go:143] libmachine: defining domain...
	I1205 07:18:14.001103   60030 main.go:143] libmachine: defining domain using XML: 
	<domain type='kvm'>
	  <name>bridge-550303</name>
	  <memory unit='MiB'>3072</memory>
	  <vcpu>2</vcpu>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough'>
	  </cpu>
	  <os>
	    <type>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <devices>
	    <disk type='file' device='cdrom'>
	      <source file='/home/jenkins/minikube-integration/21997-12744/.minikube/machines/bridge-550303/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' cache='default' io='threads' />
	      <source file='/home/jenkins/minikube-integration/21997-12744/.minikube/machines/bridge-550303/bridge-550303.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	    </disk>
	    <interface type='network'>
	      <source network='mk-bridge-550303'/>
	      <model type='virtio'/>
	    </interface>
	    <interface type='network'>
	      <source network='default'/>
	      <model type='virtio'/>
	    </interface>
	    <serial type='pty'>
	      <target port='0'/>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	    </rng>
	  </devices>
	</domain>
	
	I1205 07:18:14.006362   60030 main.go:143] libmachine: domain bridge-550303 has defined MAC address 52:54:00:25:ca:c2 in network default
	I1205 07:18:14.007144   60030 main.go:143] libmachine: domain bridge-550303 has defined MAC address 52:54:00:af:f9:26 in network mk-bridge-550303
	I1205 07:18:14.007164   60030 main.go:143] libmachine: starting domain...
	I1205 07:18:14.007170   60030 main.go:143] libmachine: ensuring networks are active...
	I1205 07:18:14.007963   60030 main.go:143] libmachine: Ensuring network default is active
	I1205 07:18:14.008289   60030 main.go:143] libmachine: Ensuring network mk-bridge-550303 is active
	I1205 07:18:14.008894   60030 main.go:143] libmachine: getting domain XML...
	I1205 07:18:14.010068   60030 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>bridge-550303</name>
	  <uuid>61f20c42-44ed-403e-90f5-b2378d79c566</uuid>
	  <memory unit='KiB'>3145728</memory>
	  <currentMemory unit='KiB'>3145728</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/21997-12744/.minikube/machines/bridge-550303/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/21997-12744/.minikube/machines/bridge-550303/bridge-550303.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:af:f9:26'/>
	      <source network='mk-bridge-550303'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:25:ca:c2'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
	
	I1205 07:18:15.350604   60030 main.go:143] libmachine: waiting for domain to start...
	I1205 07:18:15.352360   60030 main.go:143] libmachine: domain is now running
	I1205 07:18:15.352376   60030 main.go:143] libmachine: waiting for IP...
	I1205 07:18:15.353399   60030 main.go:143] libmachine: domain bridge-550303 has defined MAC address 52:54:00:af:f9:26 in network mk-bridge-550303
	I1205 07:18:15.354160   60030 main.go:143] libmachine: no network interface addresses found for domain bridge-550303 (source=lease)
	I1205 07:18:15.354174   60030 main.go:143] libmachine: trying to list again with source=arp
	I1205 07:18:15.354538   60030 main.go:143] libmachine: unable to find current IP address of domain bridge-550303 in network mk-bridge-550303 (interfaces detected: [])
	I1205 07:18:15.354575   60030 retry.go:31] will retry after 194.654691ms: waiting for domain to come up
	I1205 07:18:15.551183   60030 main.go:143] libmachine: domain bridge-550303 has defined MAC address 52:54:00:af:f9:26 in network mk-bridge-550303
	I1205 07:18:15.552104   60030 main.go:143] libmachine: no network interface addresses found for domain bridge-550303 (source=lease)
	I1205 07:18:15.552130   60030 main.go:143] libmachine: trying to list again with source=arp
	I1205 07:18:15.552521   60030 main.go:143] libmachine: unable to find current IP address of domain bridge-550303 in network mk-bridge-550303 (interfaces detected: [])
	I1205 07:18:15.552551   60030 retry.go:31] will retry after 368.735399ms: waiting for domain to come up
	I1205 07:18:15.923173   60030 main.go:143] libmachine: domain bridge-550303 has defined MAC address 52:54:00:af:f9:26 in network mk-bridge-550303
	I1205 07:18:15.924022   60030 main.go:143] libmachine: no network interface addresses found for domain bridge-550303 (source=lease)
	I1205 07:18:15.924042   60030 main.go:143] libmachine: trying to list again with source=arp
	I1205 07:18:15.924457   60030 main.go:143] libmachine: unable to find current IP address of domain bridge-550303 in network mk-bridge-550303 (interfaces detected: [])
	I1205 07:18:15.924495   60030 retry.go:31] will retry after 416.308138ms: waiting for domain to come up
	I1205 07:18:16.342112   60030 main.go:143] libmachine: domain bridge-550303 has defined MAC address 52:54:00:af:f9:26 in network mk-bridge-550303
	I1205 07:18:16.343008   60030 main.go:143] libmachine: no network interface addresses found for domain bridge-550303 (source=lease)
	I1205 07:18:16.343055   60030 main.go:143] libmachine: trying to list again with source=arp
	I1205 07:18:16.343498   60030 main.go:143] libmachine: unable to find current IP address of domain bridge-550303 in network mk-bridge-550303 (interfaces detected: [])
	I1205 07:18:16.343530   60030 retry.go:31] will retry after 592.708395ms: waiting for domain to come up
	I1205 07:18:16.938012   60030 main.go:143] libmachine: domain bridge-550303 has defined MAC address 52:54:00:af:f9:26 in network mk-bridge-550303
	I1205 07:18:16.938708   60030 main.go:143] libmachine: no network interface addresses found for domain bridge-550303 (source=lease)
	I1205 07:18:16.938728   60030 main.go:143] libmachine: trying to list again with source=arp
	I1205 07:18:16.939146   60030 main.go:143] libmachine: unable to find current IP address of domain bridge-550303 in network mk-bridge-550303 (interfaces detected: [])
	I1205 07:18:16.939184   60030 retry.go:31] will retry after 500.427385ms: waiting for domain to come up
	I1205 07:18:17.441063   60030 main.go:143] libmachine: domain bridge-550303 has defined MAC address 52:54:00:af:f9:26 in network mk-bridge-550303
	I1205 07:18:17.441991   60030 main.go:143] libmachine: no network interface addresses found for domain bridge-550303 (source=lease)
	I1205 07:18:17.442014   60030 main.go:143] libmachine: trying to list again with source=arp
	I1205 07:18:17.442510   60030 main.go:143] libmachine: unable to find current IP address of domain bridge-550303 in network mk-bridge-550303 (interfaces detected: [])
	I1205 07:18:17.442561   60030 retry.go:31] will retry after 740.235936ms: waiting for domain to come up
	I1205 07:18:18.184247   60030 main.go:143] libmachine: domain bridge-550303 has defined MAC address 52:54:00:af:f9:26 in network mk-bridge-550303
	I1205 07:18:18.185130   60030 main.go:143] libmachine: no network interface addresses found for domain bridge-550303 (source=lease)
	I1205 07:18:18.185148   60030 main.go:143] libmachine: trying to list again with source=arp
	I1205 07:18:18.185632   60030 main.go:143] libmachine: unable to find current IP address of domain bridge-550303 in network mk-bridge-550303 (interfaces detected: [])
	I1205 07:18:18.185678   60030 retry.go:31] will retry after 1.128547249s: waiting for domain to come up
	W1205 07:18:16.040872   57508 pod_ready.go:104] pod "coredns-66bc5c9577-hxn9h" is not "Ready", error: <nil>
	W1205 07:18:18.041312   57508 pod_ready.go:104] pod "coredns-66bc5c9577-hxn9h" is not "Ready", error: <nil>
	I1205 07:18:18.539453   57508 pod_ready.go:94] pod "coredns-66bc5c9577-hxn9h" is "Ready"
	I1205 07:18:18.539483   57508 pod_ready.go:86] duration metric: took 33.00636914s for pod "coredns-66bc5c9577-hxn9h" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:18:18.539498   57508 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-nx2r9" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:18:18.541767   57508 pod_ready.go:99] pod "coredns-66bc5c9577-nx2r9" in "kube-system" namespace is gone: getting pod "coredns-66bc5c9577-nx2r9" in "kube-system" namespace (will retry): pods "coredns-66bc5c9577-nx2r9" not found
	I1205 07:18:18.541796   57508 pod_ready.go:86] duration metric: took 2.289651ms for pod "coredns-66bc5c9577-nx2r9" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:18:18.546166   57508 pod_ready.go:83] waiting for pod "etcd-enable-default-cni-550303" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:18:18.550973   57508 pod_ready.go:94] pod "etcd-enable-default-cni-550303" is "Ready"
	I1205 07:18:18.551007   57508 pod_ready.go:86] duration metric: took 4.81415ms for pod "etcd-enable-default-cni-550303" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:18:18.554597   57508 pod_ready.go:83] waiting for pod "kube-apiserver-enable-default-cni-550303" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:18:18.561615   57508 pod_ready.go:94] pod "kube-apiserver-enable-default-cni-550303" is "Ready"
	I1205 07:18:18.561644   57508 pod_ready.go:86] duration metric: took 7.017744ms for pod "kube-apiserver-enable-default-cni-550303" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:18:18.564161   57508 pod_ready.go:83] waiting for pod "kube-controller-manager-enable-default-cni-550303" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:18:18.938389   57508 pod_ready.go:94] pod "kube-controller-manager-enable-default-cni-550303" is "Ready"
	I1205 07:18:18.938423   57508 pod_ready.go:86] duration metric: took 374.22761ms for pod "kube-controller-manager-enable-default-cni-550303" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:18:19.140116   57508 pod_ready.go:83] waiting for pod "kube-proxy-mhwrr" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:18:19.538706   57508 pod_ready.go:94] pod "kube-proxy-mhwrr" is "Ready"
	I1205 07:18:19.538744   57508 pod_ready.go:86] duration metric: took 398.593394ms for pod "kube-proxy-mhwrr" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:18:19.739050   57508 pod_ready.go:83] waiting for pod "kube-scheduler-enable-default-cni-550303" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:18:20.138364   57508 pod_ready.go:94] pod "kube-scheduler-enable-default-cni-550303" is "Ready"
	I1205 07:18:20.138396   57508 pod_ready.go:86] duration metric: took 399.312388ms for pod "kube-scheduler-enable-default-cni-550303" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:18:20.138412   57508 pod_ready.go:40] duration metric: took 34.622182246s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1205 07:18:20.192099   57508 start.go:625] kubectl: 1.34.2, cluster: 1.34.2 (minor skew: 0)
	I1205 07:18:20.193403   57508 out.go:179] * Done! kubectl is now configured to use "enable-default-cni-550303" cluster and "default" namespace by default
	I1205 07:18:19.315674   60030 main.go:143] libmachine: domain bridge-550303 has defined MAC address 52:54:00:af:f9:26 in network mk-bridge-550303
	I1205 07:18:19.316655   60030 main.go:143] libmachine: no network interface addresses found for domain bridge-550303 (source=lease)
	I1205 07:18:19.316676   60030 main.go:143] libmachine: trying to list again with source=arp
	I1205 07:18:19.317106   60030 main.go:143] libmachine: unable to find current IP address of domain bridge-550303 in network mk-bridge-550303 (interfaces detected: [])
	I1205 07:18:19.317147   60030 retry.go:31] will retry after 1.235363068s: waiting for domain to come up
	I1205 07:18:20.554667   60030 main.go:143] libmachine: domain bridge-550303 has defined MAC address 52:54:00:af:f9:26 in network mk-bridge-550303
	I1205 07:18:20.555438   60030 main.go:143] libmachine: no network interface addresses found for domain bridge-550303 (source=lease)
	I1205 07:18:20.555456   60030 main.go:143] libmachine: trying to list again with source=arp
	I1205 07:18:20.555951   60030 main.go:143] libmachine: unable to find current IP address of domain bridge-550303 in network mk-bridge-550303 (interfaces detected: [])
	I1205 07:18:20.555991   60030 retry.go:31] will retry after 1.688314597s: waiting for domain to come up
	I1205 07:18:22.247138   60030 main.go:143] libmachine: domain bridge-550303 has defined MAC address 52:54:00:af:f9:26 in network mk-bridge-550303
	I1205 07:18:22.247926   60030 main.go:143] libmachine: no network interface addresses found for domain bridge-550303 (source=lease)
	I1205 07:18:22.247948   60030 main.go:143] libmachine: trying to list again with source=arp
	I1205 07:18:22.248426   60030 main.go:143] libmachine: unable to find current IP address of domain bridge-550303 in network mk-bridge-550303 (interfaces detected: [])
	I1205 07:18:22.248477   60030 retry.go:31] will retry after 1.614994734s: waiting for domain to come up
	I1205 07:18:24.109107   59006 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1205 07:18:24.109197   59006 kubeadm.go:319] [preflight] Running pre-flight checks
	I1205 07:18:24.109288   59006 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 07:18:24.109416   59006 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 07:18:24.109544   59006 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1205 07:18:24.109642   59006 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 07:18:24.111494   59006 out.go:252]   - Generating certificates and keys ...
	I1205 07:18:24.111605   59006 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1205 07:18:24.111750   59006 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1205 07:18:24.111852   59006 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1205 07:18:24.111929   59006 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1205 07:18:24.112012   59006 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1205 07:18:24.112083   59006 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1205 07:18:24.112164   59006 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1205 07:18:24.112317   59006 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [flannel-550303 localhost] and IPs [192.168.83.157 127.0.0.1 ::1]
	I1205 07:18:24.112380   59006 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1205 07:18:24.112516   59006 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [flannel-550303 localhost] and IPs [192.168.83.157 127.0.0.1 ::1]
	I1205 07:18:24.112599   59006 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1205 07:18:24.112702   59006 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1205 07:18:24.112762   59006 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1205 07:18:24.112838   59006 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 07:18:24.112907   59006 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 07:18:24.112989   59006 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1205 07:18:24.113065   59006 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 07:18:24.113161   59006 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 07:18:24.113240   59006 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 07:18:24.113343   59006 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 07:18:24.113433   59006 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 07:18:24.119824   59006 out.go:252]   - Booting up control plane ...
	I1205 07:18:24.119956   59006 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 07:18:24.120070   59006 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 07:18:24.120157   59006 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 07:18:24.120291   59006 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 07:18:24.120447   59006 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1205 07:18:24.120601   59006 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1205 07:18:24.120722   59006 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 07:18:24.120777   59006 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1205 07:18:24.120939   59006 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1205 07:18:24.121080   59006 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1205 07:18:24.121161   59006 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.002420093s
	I1205 07:18:24.121278   59006 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1205 07:18:24.121393   59006 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.83.157:8443/livez
	I1205 07:18:24.121513   59006 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1205 07:18:24.121620   59006 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1205 07:18:24.121735   59006 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.240782201s
	I1205 07:18:24.121819   59006 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 4.317370011s
	I1205 07:18:24.121912   59006 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.002085854s
	I1205 07:18:24.122060   59006 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1205 07:18:24.122220   59006 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1205 07:18:24.122283   59006 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1205 07:18:24.122552   59006 kubeadm.go:319] [mark-control-plane] Marking the node flannel-550303 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1205 07:18:24.122636   59006 kubeadm.go:319] [bootstrap-token] Using token: 79sp77.nrrfwx33lfao1vd0
	I1205 07:18:24.124898   59006 out.go:252]   - Configuring RBAC rules ...
	I1205 07:18:24.125057   59006 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1205 07:18:24.125190   59006 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1205 07:18:24.125375   59006 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1205 07:18:24.125558   59006 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1205 07:18:24.125720   59006 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1205 07:18:24.125831   59006 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1205 07:18:24.125976   59006 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1205 07:18:24.126045   59006 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1205 07:18:24.126125   59006 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1205 07:18:24.126136   59006 kubeadm.go:319] 
	I1205 07:18:24.126196   59006 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1205 07:18:24.126206   59006 kubeadm.go:319] 
	I1205 07:18:24.126291   59006 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1205 07:18:24.126301   59006 kubeadm.go:319] 
	I1205 07:18:24.126334   59006 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1205 07:18:24.126415   59006 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1205 07:18:24.126487   59006 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1205 07:18:24.126496   59006 kubeadm.go:319] 
	I1205 07:18:24.126570   59006 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1205 07:18:24.126579   59006 kubeadm.go:319] 
	I1205 07:18:24.126644   59006 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1205 07:18:24.126653   59006 kubeadm.go:319] 
	I1205 07:18:24.126739   59006 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1205 07:18:24.126842   59006 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1205 07:18:24.126936   59006 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1205 07:18:24.126947   59006 kubeadm.go:319] 
	I1205 07:18:24.127066   59006 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1205 07:18:24.127166   59006 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1205 07:18:24.127179   59006 kubeadm.go:319] 
	I1205 07:18:24.127285   59006 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 79sp77.nrrfwx33lfao1vd0 \
	I1205 07:18:24.127423   59006 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2d0ac5ac5e7ca057303e4430ec89e40d74de77786c64de55c276a16d7451ec23 \
	I1205 07:18:24.127460   59006 kubeadm.go:319] 	--control-plane 
	I1205 07:18:24.127475   59006 kubeadm.go:319] 
	I1205 07:18:24.127582   59006 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1205 07:18:24.127597   59006 kubeadm.go:319] 
	I1205 07:18:24.127738   59006 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 79sp77.nrrfwx33lfao1vd0 \
	I1205 07:18:24.127915   59006 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2d0ac5ac5e7ca057303e4430ec89e40d74de77786c64de55c276a16d7451ec23 
	I1205 07:18:24.127946   59006 cni.go:84] Creating CNI manager for "flannel"
	I1205 07:18:24.129649   59006 out.go:179] * Configuring Flannel (Container Networking Interface) ...
	I1205 07:18:24.130980   59006 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1205 07:18:24.140898   59006 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1205 07:18:24.140923   59006 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (4415 bytes)
	I1205 07:18:24.183607   59006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1205 07:18:24.823975   59006 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1205 07:18:24.824133   59006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 07:18:24.824181   59006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes flannel-550303 minikube.k8s.io/updated_at=2025_12_05T07_18_24_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=d53406164b08000276c1d84507c3250851dada45 minikube.k8s.io/name=flannel-550303 minikube.k8s.io/primary=true
	I1205 07:18:25.027947   59006 ops.go:34] apiserver oom_adj: -16
	I1205 07:18:25.028081   59006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 07:18:23.865467   60030 main.go:143] libmachine: domain bridge-550303 has defined MAC address 52:54:00:af:f9:26 in network mk-bridge-550303
	I1205 07:18:23.866344   60030 main.go:143] libmachine: no network interface addresses found for domain bridge-550303 (source=lease)
	I1205 07:18:23.866369   60030 main.go:143] libmachine: trying to list again with source=arp
	I1205 07:18:23.866962   60030 main.go:143] libmachine: unable to find current IP address of domain bridge-550303 in network mk-bridge-550303 (interfaces detected: [])
	I1205 07:18:23.867000   60030 retry.go:31] will retry after 1.91554004s: waiting for domain to come up
	I1205 07:18:25.785007   60030 main.go:143] libmachine: domain bridge-550303 has defined MAC address 52:54:00:af:f9:26 in network mk-bridge-550303
	I1205 07:18:25.785840   60030 main.go:143] libmachine: no network interface addresses found for domain bridge-550303 (source=lease)
	I1205 07:18:25.785855   60030 main.go:143] libmachine: trying to list again with source=arp
	I1205 07:18:25.786287   60030 main.go:143] libmachine: unable to find current IP address of domain bridge-550303 in network mk-bridge-550303 (interfaces detected: [])
	I1205 07:18:25.786331   60030 retry.go:31] will retry after 2.718484477s: waiting for domain to come up
	I1205 07:18:28.508253   60030 main.go:143] libmachine: domain bridge-550303 has defined MAC address 52:54:00:af:f9:26 in network mk-bridge-550303
	I1205 07:18:28.508919   60030 main.go:143] libmachine: no network interface addresses found for domain bridge-550303 (source=lease)
	I1205 07:18:28.508941   60030 main.go:143] libmachine: trying to list again with source=arp
	I1205 07:18:28.509350   60030 main.go:143] libmachine: unable to find current IP address of domain bridge-550303 in network mk-bridge-550303 (interfaces detected: [])
	I1205 07:18:28.509386   60030 retry.go:31] will retry after 3.742613467s: waiting for domain to come up
	I1205 07:18:25.528417   59006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 07:18:26.029060   59006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 07:18:26.529178   59006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 07:18:27.028988   59006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 07:18:27.528700   59006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 07:18:28.028996   59006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 07:18:28.528831   59006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 07:18:29.028327   59006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 07:18:29.135005   59006 kubeadm.go:1114] duration metric: took 4.310968383s to wait for elevateKubeSystemPrivileges
	I1205 07:18:29.135045   59006 kubeadm.go:403] duration metric: took 18.340427715s to StartCluster
	I1205 07:18:29.135066   59006 settings.go:142] acquiring lock: {Name:mk2f276bdecf61f8264687dd612372cc78cfacbb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:18:29.135155   59006 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21997-12744/kubeconfig
	I1205 07:18:29.136236   59006 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-12744/kubeconfig: {Name:mka919c4eb7b6e761ae422db15b3daf8c8fde4d7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:18:29.136470   59006 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.83.157 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 07:18:29.136491   59006 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1205 07:18:29.136532   59006 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1205 07:18:29.136642   59006 addons.go:70] Setting storage-provisioner=true in profile "flannel-550303"
	I1205 07:18:29.136669   59006 addons.go:239] Setting addon storage-provisioner=true in "flannel-550303"
	I1205 07:18:29.136694   59006 addons.go:70] Setting default-storageclass=true in profile "flannel-550303"
	I1205 07:18:29.136721   59006 host.go:66] Checking if "flannel-550303" exists ...
	I1205 07:18:29.136733   59006 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "flannel-550303"
	I1205 07:18:29.136702   59006 config.go:182] Loaded profile config "flannel-550303": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1205 07:18:29.139804   59006 out.go:179] * Verifying Kubernetes components...
	I1205 07:18:29.140519   59006 addons.go:239] Setting addon default-storageclass=true in "flannel-550303"
	I1205 07:18:29.140550   59006 host.go:66] Checking if "flannel-550303" exists ...
	I1205 07:18:29.141080   59006 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 07:18:29.141114   59006 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 07:18:29.142417   59006 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1205 07:18:29.142438   59006 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1205 07:18:29.142732   59006 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 07:18:29.142748   59006 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1205 07:18:29.145844   59006 main.go:143] libmachine: domain flannel-550303 has defined MAC address 52:54:00:06:7d:64 in network mk-flannel-550303
	I1205 07:18:29.146080   59006 main.go:143] libmachine: domain flannel-550303 has defined MAC address 52:54:00:06:7d:64 in network mk-flannel-550303
	I1205 07:18:29.146311   59006 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:06:7d:64", ip: ""} in network mk-flannel-550303: {Iface:virbr5 ExpiryTime:2025-12-05 08:18:01 +0000 UTC Type:0 Mac:52:54:00:06:7d:64 Iaid: IPaddr:192.168.83.157 Prefix:24 Hostname:flannel-550303 Clientid:01:52:54:00:06:7d:64}
	I1205 07:18:29.146341   59006 main.go:143] libmachine: domain flannel-550303 has defined IP address 192.168.83.157 and MAC address 52:54:00:06:7d:64 in network mk-flannel-550303
	I1205 07:18:29.146549   59006 sshutil.go:53] new ssh client: &{IP:192.168.83.157 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-12744/.minikube/machines/flannel-550303/id_rsa Username:docker}
	I1205 07:18:29.146783   59006 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:06:7d:64", ip: ""} in network mk-flannel-550303: {Iface:virbr5 ExpiryTime:2025-12-05 08:18:01 +0000 UTC Type:0 Mac:52:54:00:06:7d:64 Iaid: IPaddr:192.168.83.157 Prefix:24 Hostname:flannel-550303 Clientid:01:52:54:00:06:7d:64}
	I1205 07:18:29.146821   59006 main.go:143] libmachine: domain flannel-550303 has defined IP address 192.168.83.157 and MAC address 52:54:00:06:7d:64 in network mk-flannel-550303
	I1205 07:18:29.147021   59006 sshutil.go:53] new ssh client: &{IP:192.168.83.157 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-12744/.minikube/machines/flannel-550303/id_rsa Username:docker}
	I1205 07:18:29.426136   59006 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.83.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1205 07:18:29.501403   59006 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 07:18:29.637347   59006 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 07:18:29.691541   59006 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1205 07:18:30.215243   59006 start.go:977] {"host.minikube.internal": 192.168.83.1} host record injected into CoreDNS's ConfigMap
	I1205 07:18:30.216601   59006 node_ready.go:35] waiting up to 15m0s for node "flannel-550303" to be "Ready" ...
	I1205 07:18:30.492459   59006 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1205 07:18:32.256247   60030 main.go:143] libmachine: domain bridge-550303 has defined MAC address 52:54:00:af:f9:26 in network mk-bridge-550303
	I1205 07:18:32.257040   60030 main.go:143] libmachine: domain bridge-550303 has current primary IP address 192.168.72.252 and MAC address 52:54:00:af:f9:26 in network mk-bridge-550303
	I1205 07:18:32.257062   60030 main.go:143] libmachine: found domain IP: 192.168.72.252
	I1205 07:18:32.257072   60030 main.go:143] libmachine: reserving static IP address...
	I1205 07:18:32.257728   60030 main.go:143] libmachine: unable to find host DHCP lease matching {name: "bridge-550303", mac: "52:54:00:af:f9:26", ip: "192.168.72.252"} in network mk-bridge-550303
	I1205 07:18:32.503432   60030 main.go:143] libmachine: reserved static IP address 192.168.72.252 for domain bridge-550303
	I1205 07:18:32.503453   60030 main.go:143] libmachine: waiting for SSH...
	I1205 07:18:32.503460   60030 main.go:143] libmachine: Getting to WaitForSSH function...
	I1205 07:18:32.507085   60030 main.go:143] libmachine: domain bridge-550303 has defined MAC address 52:54:00:af:f9:26 in network mk-bridge-550303
	I1205 07:18:32.507562   60030 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:af:f9:26", ip: ""} in network mk-bridge-550303: {Iface:virbr4 ExpiryTime:2025-12-05 08:18:30 +0000 UTC Type:0 Mac:52:54:00:af:f9:26 Iaid: IPaddr:192.168.72.252 Prefix:24 Hostname:minikube Clientid:01:52:54:00:af:f9:26}
	I1205 07:18:32.507587   60030 main.go:143] libmachine: domain bridge-550303 has defined IP address 192.168.72.252 and MAC address 52:54:00:af:f9:26 in network mk-bridge-550303
	I1205 07:18:32.507883   60030 main.go:143] libmachine: Using SSH client type: native
	I1205 07:18:32.508159   60030 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.72.252 22 <nil> <nil>}
	I1205 07:18:32.508171   60030 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1205 07:18:32.614557   60030 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1205 07:18:32.615001   60030 main.go:143] libmachine: domain creation complete
	I1205 07:18:32.616878   60030 machine.go:94] provisionDockerMachine start ...
	I1205 07:18:32.619577   60030 main.go:143] libmachine: domain bridge-550303 has defined MAC address 52:54:00:af:f9:26 in network mk-bridge-550303
	I1205 07:18:32.620037   60030 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:af:f9:26", ip: ""} in network mk-bridge-550303: {Iface:virbr4 ExpiryTime:2025-12-05 08:18:30 +0000 UTC Type:0 Mac:52:54:00:af:f9:26 Iaid: IPaddr:192.168.72.252 Prefix:24 Hostname:bridge-550303 Clientid:01:52:54:00:af:f9:26}
	I1205 07:18:32.620060   60030 main.go:143] libmachine: domain bridge-550303 has defined IP address 192.168.72.252 and MAC address 52:54:00:af:f9:26 in network mk-bridge-550303
	I1205 07:18:32.620217   60030 main.go:143] libmachine: Using SSH client type: native
	I1205 07:18:32.620409   60030 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.72.252 22 <nil> <nil>}
	I1205 07:18:32.620420   60030 main.go:143] libmachine: About to run SSH command:
	hostname
	I1205 07:18:32.727888   60030 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1205 07:18:32.727914   60030 buildroot.go:166] provisioning hostname "bridge-550303"
	I1205 07:18:32.731009   60030 main.go:143] libmachine: domain bridge-550303 has defined MAC address 52:54:00:af:f9:26 in network mk-bridge-550303
	I1205 07:18:32.731551   60030 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:af:f9:26", ip: ""} in network mk-bridge-550303: {Iface:virbr4 ExpiryTime:2025-12-05 08:18:30 +0000 UTC Type:0 Mac:52:54:00:af:f9:26 Iaid: IPaddr:192.168.72.252 Prefix:24 Hostname:bridge-550303 Clientid:01:52:54:00:af:f9:26}
	I1205 07:18:32.731589   60030 main.go:143] libmachine: domain bridge-550303 has defined IP address 192.168.72.252 and MAC address 52:54:00:af:f9:26 in network mk-bridge-550303
	I1205 07:18:32.731831   60030 main.go:143] libmachine: Using SSH client type: native
	I1205 07:18:32.732107   60030 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.72.252 22 <nil> <nil>}
	I1205 07:18:32.732125   60030 main.go:143] libmachine: About to run SSH command:
	sudo hostname bridge-550303 && echo "bridge-550303" | sudo tee /etc/hostname
	I1205 07:18:32.860203   60030 main.go:143] libmachine: SSH cmd err, output: <nil>: bridge-550303
	
	I1205 07:18:32.863344   60030 main.go:143] libmachine: domain bridge-550303 has defined MAC address 52:54:00:af:f9:26 in network mk-bridge-550303
	I1205 07:18:32.863813   60030 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:af:f9:26", ip: ""} in network mk-bridge-550303: {Iface:virbr4 ExpiryTime:2025-12-05 08:18:30 +0000 UTC Type:0 Mac:52:54:00:af:f9:26 Iaid: IPaddr:192.168.72.252 Prefix:24 Hostname:bridge-550303 Clientid:01:52:54:00:af:f9:26}
	I1205 07:18:32.863840   60030 main.go:143] libmachine: domain bridge-550303 has defined IP address 192.168.72.252 and MAC address 52:54:00:af:f9:26 in network mk-bridge-550303
	I1205 07:18:32.864016   60030 main.go:143] libmachine: Using SSH client type: native
	I1205 07:18:32.864207   60030 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.72.252 22 <nil> <nil>}
	I1205 07:18:32.864224   60030 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sbridge-550303' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 bridge-550303/g' /etc/hosts;
				else 
					echo '127.0.1.1 bridge-550303' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 07:18:32.979464   60030 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1205 07:18:32.979498   60030 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21997-12744/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-12744/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-12744/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-12744/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-12744/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-12744/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-12744/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-12744/.minikube}
	I1205 07:18:32.979526   60030 buildroot.go:174] setting up certificates
	I1205 07:18:32.979537   60030 provision.go:84] configureAuth start
	I1205 07:18:32.983569   60030 main.go:143] libmachine: domain bridge-550303 has defined MAC address 52:54:00:af:f9:26 in network mk-bridge-550303
	I1205 07:18:32.984174   60030 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:af:f9:26", ip: ""} in network mk-bridge-550303: {Iface:virbr4 ExpiryTime:2025-12-05 08:18:30 +0000 UTC Type:0 Mac:52:54:00:af:f9:26 Iaid: IPaddr:192.168.72.252 Prefix:24 Hostname:bridge-550303 Clientid:01:52:54:00:af:f9:26}
	I1205 07:18:32.984205   60030 main.go:143] libmachine: domain bridge-550303 has defined IP address 192.168.72.252 and MAC address 52:54:00:af:f9:26 in network mk-bridge-550303
	I1205 07:18:32.987904   60030 main.go:143] libmachine: domain bridge-550303 has defined MAC address 52:54:00:af:f9:26 in network mk-bridge-550303
	I1205 07:18:32.988423   60030 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:af:f9:26", ip: ""} in network mk-bridge-550303: {Iface:virbr4 ExpiryTime:2025-12-05 08:18:30 +0000 UTC Type:0 Mac:52:54:00:af:f9:26 Iaid: IPaddr:192.168.72.252 Prefix:24 Hostname:bridge-550303 Clientid:01:52:54:00:af:f9:26}
	I1205 07:18:32.988459   60030 main.go:143] libmachine: domain bridge-550303 has defined IP address 192.168.72.252 and MAC address 52:54:00:af:f9:26 in network mk-bridge-550303
	I1205 07:18:32.988703   60030 provision.go:143] copyHostCerts
	I1205 07:18:32.988772   60030 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-12744/.minikube/ca.pem, removing ...
	I1205 07:18:32.988786   60030 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-12744/.minikube/ca.pem
	I1205 07:18:32.988853   60030 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-12744/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-12744/.minikube/ca.pem (1078 bytes)
	I1205 07:18:32.988984   60030 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-12744/.minikube/cert.pem, removing ...
	I1205 07:18:32.988998   60030 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-12744/.minikube/cert.pem
	I1205 07:18:32.989041   60030 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-12744/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-12744/.minikube/cert.pem (1123 bytes)
	I1205 07:18:32.989129   60030 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-12744/.minikube/key.pem, removing ...
	I1205 07:18:32.989141   60030 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-12744/.minikube/key.pem
	I1205 07:18:32.989177   60030 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-12744/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-12744/.minikube/key.pem (1675 bytes)
	I1205 07:18:32.989273   60030 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-12744/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-12744/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-12744/.minikube/certs/ca-key.pem org=jenkins.bridge-550303 san=[127.0.0.1 192.168.72.252 bridge-550303 localhost minikube]
	I1205 07:18:33.048002   60030 provision.go:177] copyRemoteCerts
	I1205 07:18:33.048064   60030 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 07:18:33.051530   60030 main.go:143] libmachine: domain bridge-550303 has defined MAC address 52:54:00:af:f9:26 in network mk-bridge-550303
	I1205 07:18:33.052034   60030 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:af:f9:26", ip: ""} in network mk-bridge-550303: {Iface:virbr4 ExpiryTime:2025-12-05 08:18:30 +0000 UTC Type:0 Mac:52:54:00:af:f9:26 Iaid: IPaddr:192.168.72.252 Prefix:24 Hostname:bridge-550303 Clientid:01:52:54:00:af:f9:26}
	I1205 07:18:33.052104   60030 main.go:143] libmachine: domain bridge-550303 has defined IP address 192.168.72.252 and MAC address 52:54:00:af:f9:26 in network mk-bridge-550303
	I1205 07:18:33.052291   60030 sshutil.go:53] new ssh client: &{IP:192.168.72.252 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-12744/.minikube/machines/bridge-550303/id_rsa Username:docker}
	I1205 07:18:33.137954   60030 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12744/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1205 07:18:33.179762   60030 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12744/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1205 07:18:33.219274   60030 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12744/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1205 07:18:33.258647   60030 provision.go:87] duration metric: took 279.093367ms to configureAuth
	I1205 07:18:33.258695   60030 buildroot.go:189] setting minikube options for container-runtime
	I1205 07:18:33.258914   60030 config.go:182] Loaded profile config "bridge-550303": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1205 07:18:33.262722   60030 main.go:143] libmachine: domain bridge-550303 has defined MAC address 52:54:00:af:f9:26 in network mk-bridge-550303
	I1205 07:18:33.263318   60030 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:af:f9:26", ip: ""} in network mk-bridge-550303: {Iface:virbr4 ExpiryTime:2025-12-05 08:18:30 +0000 UTC Type:0 Mac:52:54:00:af:f9:26 Iaid: IPaddr:192.168.72.252 Prefix:24 Hostname:bridge-550303 Clientid:01:52:54:00:af:f9:26}
	I1205 07:18:33.263365   60030 main.go:143] libmachine: domain bridge-550303 has defined IP address 192.168.72.252 and MAC address 52:54:00:af:f9:26 in network mk-bridge-550303
	I1205 07:18:33.263712   60030 main.go:143] libmachine: Using SSH client type: native
	I1205 07:18:33.263995   60030 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.72.252 22 <nil> <nil>}
	I1205 07:18:33.264022   60030 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 07:18:33.518117   60030 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 07:18:33.518173   60030 machine.go:97] duration metric: took 901.257913ms to provisionDockerMachine
	I1205 07:18:33.518184   60030 client.go:176] duration metric: took 19.920860665s to LocalClient.Create
	I1205 07:18:33.518204   60030 start.go:167] duration metric: took 19.920928457s to libmachine.API.Create "bridge-550303"
	I1205 07:18:33.518214   60030 start.go:293] postStartSetup for "bridge-550303" (driver="kvm2")
	I1205 07:18:33.518236   60030 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 07:18:33.518305   60030 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 07:18:33.521970   60030 main.go:143] libmachine: domain bridge-550303 has defined MAC address 52:54:00:af:f9:26 in network mk-bridge-550303
	I1205 07:18:33.522588   60030 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:af:f9:26", ip: ""} in network mk-bridge-550303: {Iface:virbr4 ExpiryTime:2025-12-05 08:18:30 +0000 UTC Type:0 Mac:52:54:00:af:f9:26 Iaid: IPaddr:192.168.72.252 Prefix:24 Hostname:bridge-550303 Clientid:01:52:54:00:af:f9:26}
	I1205 07:18:33.522622   60030 main.go:143] libmachine: domain bridge-550303 has defined IP address 192.168.72.252 and MAC address 52:54:00:af:f9:26 in network mk-bridge-550303
	I1205 07:18:33.522865   60030 sshutil.go:53] new ssh client: &{IP:192.168.72.252 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-12744/.minikube/machines/bridge-550303/id_rsa Username:docker}
	I1205 07:18:30.493770   59006 addons.go:530] duration metric: took 1.357235457s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1205 07:18:30.723267   59006 kapi.go:214] "coredns" deployment in "kube-system" namespace and "flannel-550303" context rescaled to 1 replicas
	W1205 07:18:32.220349   59006 node_ready.go:57] node "flannel-550303" has "Ready":"False" status (will retry)
	W1205 07:18:34.724578   59006 node_ready.go:57] node "flannel-550303" has "Ready":"False" status (will retry)
	I1205 07:18:33.611296   60030 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 07:18:33.617905   60030 info.go:137] Remote host: Buildroot 2025.02
	I1205 07:18:33.617936   60030 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-12744/.minikube/addons for local assets ...
	I1205 07:18:33.618095   60030 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-12744/.minikube/files for local assets ...
	I1205 07:18:33.618227   60030 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-12744/.minikube/files/etc/ssl/certs/167022.pem -> 167022.pem in /etc/ssl/certs
	I1205 07:18:33.618344   60030 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 07:18:33.631215   60030 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12744/.minikube/files/etc/ssl/certs/167022.pem --> /etc/ssl/certs/167022.pem (1708 bytes)
	I1205 07:18:33.668205   60030 start.go:296] duration metric: took 149.967423ms for postStartSetup
	I1205 07:18:33.671786   60030 main.go:143] libmachine: domain bridge-550303 has defined MAC address 52:54:00:af:f9:26 in network mk-bridge-550303
	I1205 07:18:33.672357   60030 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:af:f9:26", ip: ""} in network mk-bridge-550303: {Iface:virbr4 ExpiryTime:2025-12-05 08:18:30 +0000 UTC Type:0 Mac:52:54:00:af:f9:26 Iaid: IPaddr:192.168.72.252 Prefix:24 Hostname:bridge-550303 Clientid:01:52:54:00:af:f9:26}
	I1205 07:18:33.672392   60030 main.go:143] libmachine: domain bridge-550303 has defined IP address 192.168.72.252 and MAC address 52:54:00:af:f9:26 in network mk-bridge-550303
	I1205 07:18:33.672653   60030 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/bridge-550303/config.json ...
	I1205 07:18:33.672974   60030 start.go:128] duration metric: took 20.077729877s to createHost
	I1205 07:18:33.675479   60030 main.go:143] libmachine: domain bridge-550303 has defined MAC address 52:54:00:af:f9:26 in network mk-bridge-550303
	I1205 07:18:33.675899   60030 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:af:f9:26", ip: ""} in network mk-bridge-550303: {Iface:virbr4 ExpiryTime:2025-12-05 08:18:30 +0000 UTC Type:0 Mac:52:54:00:af:f9:26 Iaid: IPaddr:192.168.72.252 Prefix:24 Hostname:bridge-550303 Clientid:01:52:54:00:af:f9:26}
	I1205 07:18:33.675925   60030 main.go:143] libmachine: domain bridge-550303 has defined IP address 192.168.72.252 and MAC address 52:54:00:af:f9:26 in network mk-bridge-550303
	I1205 07:18:33.676140   60030 main.go:143] libmachine: Using SSH client type: native
	I1205 07:18:33.676327   60030 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.72.252 22 <nil> <nil>}
	I1205 07:18:33.676337   60030 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1205 07:18:33.785559   60030 main.go:143] libmachine: SSH cmd err, output: <nil>: 1764919113.752671997
	
	I1205 07:18:33.785605   60030 fix.go:216] guest clock: 1764919113.752671997
	I1205 07:18:33.785618   60030 fix.go:229] Guest: 2025-12-05 07:18:33.752671997 +0000 UTC Remote: 2025-12-05 07:18:33.672990878 +0000 UTC m=+20.191676311 (delta=79.681119ms)
	I1205 07:18:33.785642   60030 fix.go:200] guest clock delta is within tolerance: 79.681119ms
	I1205 07:18:33.785649   60030 start.go:83] releasing machines lock for "bridge-550303", held for 20.190478295s
	I1205 07:18:33.789233   60030 main.go:143] libmachine: domain bridge-550303 has defined MAC address 52:54:00:af:f9:26 in network mk-bridge-550303
	I1205 07:18:33.789782   60030 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:af:f9:26", ip: ""} in network mk-bridge-550303: {Iface:virbr4 ExpiryTime:2025-12-05 08:18:30 +0000 UTC Type:0 Mac:52:54:00:af:f9:26 Iaid: IPaddr:192.168.72.252 Prefix:24 Hostname:bridge-550303 Clientid:01:52:54:00:af:f9:26}
	I1205 07:18:33.789816   60030 main.go:143] libmachine: domain bridge-550303 has defined IP address 192.168.72.252 and MAC address 52:54:00:af:f9:26 in network mk-bridge-550303
	I1205 07:18:33.790558   60030 ssh_runner.go:195] Run: cat /version.json
	I1205 07:18:33.790666   60030 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 07:18:33.794605   60030 main.go:143] libmachine: domain bridge-550303 has defined MAC address 52:54:00:af:f9:26 in network mk-bridge-550303
	I1205 07:18:33.794755   60030 main.go:143] libmachine: domain bridge-550303 has defined MAC address 52:54:00:af:f9:26 in network mk-bridge-550303
	I1205 07:18:33.795150   60030 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:af:f9:26", ip: ""} in network mk-bridge-550303: {Iface:virbr4 ExpiryTime:2025-12-05 08:18:30 +0000 UTC Type:0 Mac:52:54:00:af:f9:26 Iaid: IPaddr:192.168.72.252 Prefix:24 Hostname:bridge-550303 Clientid:01:52:54:00:af:f9:26}
	I1205 07:18:33.795217   60030 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:af:f9:26", ip: ""} in network mk-bridge-550303: {Iface:virbr4 ExpiryTime:2025-12-05 08:18:30 +0000 UTC Type:0 Mac:52:54:00:af:f9:26 Iaid: IPaddr:192.168.72.252 Prefix:24 Hostname:bridge-550303 Clientid:01:52:54:00:af:f9:26}
	I1205 07:18:33.795248   60030 main.go:143] libmachine: domain bridge-550303 has defined IP address 192.168.72.252 and MAC address 52:54:00:af:f9:26 in network mk-bridge-550303
	I1205 07:18:33.795293   60030 main.go:143] libmachine: domain bridge-550303 has defined IP address 192.168.72.252 and MAC address 52:54:00:af:f9:26 in network mk-bridge-550303
	I1205 07:18:33.795503   60030 sshutil.go:53] new ssh client: &{IP:192.168.72.252 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-12744/.minikube/machines/bridge-550303/id_rsa Username:docker}
	I1205 07:18:33.795730   60030 sshutil.go:53] new ssh client: &{IP:192.168.72.252 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-12744/.minikube/machines/bridge-550303/id_rsa Username:docker}
	I1205 07:18:33.880455   60030 ssh_runner.go:195] Run: systemctl --version
	I1205 07:18:33.907662   60030 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 07:18:34.074181   60030 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 07:18:34.082472   60030 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 07:18:34.082553   60030 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 07:18:34.107964   60030 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1205 07:18:34.107994   60030 start.go:496] detecting cgroup driver to use...
	I1205 07:18:34.108075   60030 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 07:18:34.128697   60030 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 07:18:34.148949   60030 docker.go:218] disabling cri-docker service (if available) ...
	I1205 07:18:34.149023   60030 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 07:18:34.170481   60030 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 07:18:34.188994   60030 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 07:18:34.364076   60030 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 07:18:34.584559   60030 docker.go:234] disabling docker service ...
	I1205 07:18:34.584639   60030 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 07:18:34.608256   60030 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 07:18:34.624890   60030 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 07:18:34.791796   60030 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 07:18:34.948757   60030 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 07:18:34.964585   60030 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 07:18:34.990002   60030 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1205 07:18:34.990148   60030 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 07:18:35.008636   60030 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1205 07:18:35.008732   60030 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 07:18:35.025987   60030 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 07:18:35.043332   60030 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 07:18:35.060123   60030 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 07:18:35.074854   60030 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 07:18:35.087828   60030 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 07:18:35.112965   60030 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 07:18:35.126794   60030 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 07:18:35.137448   60030 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1205 07:18:35.137517   60030 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1205 07:18:35.160442   60030 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 07:18:35.175156   60030 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 07:18:35.345824   60030 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1205 07:18:35.481928   60030 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 07:18:35.482007   60030 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 07:18:35.488057   60030 start.go:564] Will wait 60s for crictl version
	I1205 07:18:35.488148   60030 ssh_runner.go:195] Run: which crictl
	I1205 07:18:35.492213   60030 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1205 07:18:35.529443   60030 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1205 07:18:35.529546   60030 ssh_runner.go:195] Run: crio --version
	I1205 07:18:35.561438   60030 ssh_runner.go:195] Run: crio --version
	I1205 07:18:35.594317   60030 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.29.1 ...
	I1205 07:18:35.598736   60030 main.go:143] libmachine: domain bridge-550303 has defined MAC address 52:54:00:af:f9:26 in network mk-bridge-550303
	I1205 07:18:35.599218   60030 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:af:f9:26", ip: ""} in network mk-bridge-550303: {Iface:virbr4 ExpiryTime:2025-12-05 08:18:30 +0000 UTC Type:0 Mac:52:54:00:af:f9:26 Iaid: IPaddr:192.168.72.252 Prefix:24 Hostname:bridge-550303 Clientid:01:52:54:00:af:f9:26}
	I1205 07:18:35.599244   60030 main.go:143] libmachine: domain bridge-550303 has defined IP address 192.168.72.252 and MAC address 52:54:00:af:f9:26 in network mk-bridge-550303
	I1205 07:18:35.599466   60030 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1205 07:18:35.604266   60030 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 07:18:35.620713   60030 kubeadm.go:884] updating cluster {Name:bridge-550303 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:bridge-550303 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.72.252 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 07:18:35.620871   60030 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1205 07:18:35.620929   60030 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 07:18:35.651522   60030 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.2". assuming images are not preloaded.
	I1205 07:18:35.651594   60030 ssh_runner.go:195] Run: which lz4
	I1205 07:18:35.657732   60030 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1205 07:18:35.663422   60030 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1205 07:18:35.663463   60030 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12744/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (340306595 bytes)
	I1205 07:18:37.000088   60030 crio.go:462] duration metric: took 1.342401866s to copy over tarball
	I1205 07:18:37.000166   60030 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1205 07:18:35.723929   59006 node_ready.go:49] node "flannel-550303" is "Ready"
	I1205 07:18:35.723958   59006 node_ready.go:38] duration metric: took 5.507323326s for node "flannel-550303" to be "Ready" ...
	I1205 07:18:35.723972   59006 api_server.go:52] waiting for apiserver process to appear ...
	I1205 07:18:35.724023   59006 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:18:35.773172   59006 api_server.go:72] duration metric: took 6.636661415s to wait for apiserver process to appear ...
	I1205 07:18:35.773204   59006 api_server.go:88] waiting for apiserver healthz status ...
	I1205 07:18:35.773221   59006 api_server.go:253] Checking apiserver healthz at https://192.168.83.157:8443/healthz ...
	I1205 07:18:35.779168   59006 api_server.go:279] https://192.168.83.157:8443/healthz returned 200:
	ok
	I1205 07:18:35.781140   59006 api_server.go:141] control plane version: v1.34.2
	I1205 07:18:35.781177   59006 api_server.go:131] duration metric: took 7.964259ms to wait for apiserver health ...
	I1205 07:18:35.781190   59006 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 07:18:35.798080   59006 system_pods.go:59] 7 kube-system pods found
	I1205 07:18:35.798152   59006 system_pods.go:61] "coredns-66bc5c9577-zl5kn" [14b24589-7009-4a9b-ac67-61706cd7e467] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 07:18:35.798170   59006 system_pods.go:61] "etcd-flannel-550303" [7b6a9d53-ca92-467f-b068-5a1afaea8690] Running
	I1205 07:18:35.798181   59006 system_pods.go:61] "kube-apiserver-flannel-550303" [be691a85-5e62-4aab-a823-68c1c2013c76] Running
	I1205 07:18:35.798189   59006 system_pods.go:61] "kube-controller-manager-flannel-550303" [c55715b5-5479-4d3f-81e1-44a40234cf9c] Running
	I1205 07:18:35.798203   59006 system_pods.go:61] "kube-proxy-cl4z8" [7f1ff15f-1853-44df-8885-d99528fd197f] Running
	I1205 07:18:35.798212   59006 system_pods.go:61] "kube-scheduler-flannel-550303" [1c19495d-a875-4217-a91d-90f4f0a9467f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1205 07:18:35.798226   59006 system_pods.go:61] "storage-provisioner" [8f1cefad-6018-4b0a-bdf3-09d7c0a7a473] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1205 07:18:35.798236   59006 system_pods.go:74] duration metric: took 17.038892ms to wait for pod list to return data ...
	I1205 07:18:35.798263   59006 default_sa.go:34] waiting for default service account to be created ...
	I1205 07:18:35.808084   59006 default_sa.go:45] found service account: "default"
	I1205 07:18:35.808118   59006 default_sa.go:55] duration metric: took 9.846073ms for default service account to be created ...
	I1205 07:18:35.808132   59006 system_pods.go:116] waiting for k8s-apps to be running ...
	I1205 07:18:35.812542   59006 system_pods.go:86] 7 kube-system pods found
	I1205 07:18:35.812580   59006 system_pods.go:89] "coredns-66bc5c9577-zl5kn" [14b24589-7009-4a9b-ac67-61706cd7e467] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 07:18:35.812589   59006 system_pods.go:89] "etcd-flannel-550303" [7b6a9d53-ca92-467f-b068-5a1afaea8690] Running
	I1205 07:18:35.812598   59006 system_pods.go:89] "kube-apiserver-flannel-550303" [be691a85-5e62-4aab-a823-68c1c2013c76] Running
	I1205 07:18:35.812604   59006 system_pods.go:89] "kube-controller-manager-flannel-550303" [c55715b5-5479-4d3f-81e1-44a40234cf9c] Running
	I1205 07:18:35.812610   59006 system_pods.go:89] "kube-proxy-cl4z8" [7f1ff15f-1853-44df-8885-d99528fd197f] Running
	I1205 07:18:35.812619   59006 system_pods.go:89] "kube-scheduler-flannel-550303" [1c19495d-a875-4217-a91d-90f4f0a9467f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1205 07:18:35.812629   59006 system_pods.go:89] "storage-provisioner" [8f1cefad-6018-4b0a-bdf3-09d7c0a7a473] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1205 07:18:35.812656   59006 retry.go:31] will retry after 226.179807ms: missing components: kube-dns
	I1205 07:18:36.048190   59006 system_pods.go:86] 7 kube-system pods found
	I1205 07:18:36.048236   59006 system_pods.go:89] "coredns-66bc5c9577-zl5kn" [14b24589-7009-4a9b-ac67-61706cd7e467] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 07:18:36.048245   59006 system_pods.go:89] "etcd-flannel-550303" [7b6a9d53-ca92-467f-b068-5a1afaea8690] Running
	I1205 07:18:36.048257   59006 system_pods.go:89] "kube-apiserver-flannel-550303" [be691a85-5e62-4aab-a823-68c1c2013c76] Running
	I1205 07:18:36.048267   59006 system_pods.go:89] "kube-controller-manager-flannel-550303" [c55715b5-5479-4d3f-81e1-44a40234cf9c] Running
	I1205 07:18:36.048277   59006 system_pods.go:89] "kube-proxy-cl4z8" [7f1ff15f-1853-44df-8885-d99528fd197f] Running
	I1205 07:18:36.048285   59006 system_pods.go:89] "kube-scheduler-flannel-550303" [1c19495d-a875-4217-a91d-90f4f0a9467f] Running
	I1205 07:18:36.048294   59006 system_pods.go:89] "storage-provisioner" [8f1cefad-6018-4b0a-bdf3-09d7c0a7a473] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1205 07:18:36.048315   59006 retry.go:31] will retry after 374.101565ms: missing components: kube-dns
	I1205 07:18:36.428239   59006 system_pods.go:86] 7 kube-system pods found
	I1205 07:18:36.428278   59006 system_pods.go:89] "coredns-66bc5c9577-zl5kn" [14b24589-7009-4a9b-ac67-61706cd7e467] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 07:18:36.428288   59006 system_pods.go:89] "etcd-flannel-550303" [7b6a9d53-ca92-467f-b068-5a1afaea8690] Running
	I1205 07:18:36.428296   59006 system_pods.go:89] "kube-apiserver-flannel-550303" [be691a85-5e62-4aab-a823-68c1c2013c76] Running
	I1205 07:18:36.428307   59006 system_pods.go:89] "kube-controller-manager-flannel-550303" [c55715b5-5479-4d3f-81e1-44a40234cf9c] Running
	I1205 07:18:36.428312   59006 system_pods.go:89] "kube-proxy-cl4z8" [7f1ff15f-1853-44df-8885-d99528fd197f] Running
	I1205 07:18:36.428318   59006 system_pods.go:89] "kube-scheduler-flannel-550303" [1c19495d-a875-4217-a91d-90f4f0a9467f] Running
	I1205 07:18:36.428325   59006 system_pods.go:89] "storage-provisioner" [8f1cefad-6018-4b0a-bdf3-09d7c0a7a473] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1205 07:18:36.428343   59006 retry.go:31] will retry after 456.879152ms: missing components: kube-dns
	I1205 07:18:36.891540   59006 system_pods.go:86] 7 kube-system pods found
	I1205 07:18:36.891581   59006 system_pods.go:89] "coredns-66bc5c9577-zl5kn" [14b24589-7009-4a9b-ac67-61706cd7e467] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 07:18:36.891591   59006 system_pods.go:89] "etcd-flannel-550303" [7b6a9d53-ca92-467f-b068-5a1afaea8690] Running
	I1205 07:18:36.891599   59006 system_pods.go:89] "kube-apiserver-flannel-550303" [be691a85-5e62-4aab-a823-68c1c2013c76] Running
	I1205 07:18:36.891605   59006 system_pods.go:89] "kube-controller-manager-flannel-550303" [c55715b5-5479-4d3f-81e1-44a40234cf9c] Running
	I1205 07:18:36.891610   59006 system_pods.go:89] "kube-proxy-cl4z8" [7f1ff15f-1853-44df-8885-d99528fd197f] Running
	I1205 07:18:36.891615   59006 system_pods.go:89] "kube-scheduler-flannel-550303" [1c19495d-a875-4217-a91d-90f4f0a9467f] Running
	I1205 07:18:36.891624   59006 system_pods.go:89] "storage-provisioner" [8f1cefad-6018-4b0a-bdf3-09d7c0a7a473] Running
	I1205 07:18:36.891778   59006 retry.go:31] will retry after 415.855633ms: missing components: kube-dns
	I1205 07:18:37.316652   59006 system_pods.go:86] 7 kube-system pods found
	I1205 07:18:37.316706   59006 system_pods.go:89] "coredns-66bc5c9577-zl5kn" [14b24589-7009-4a9b-ac67-61706cd7e467] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 07:18:37.316715   59006 system_pods.go:89] "etcd-flannel-550303" [7b6a9d53-ca92-467f-b068-5a1afaea8690] Running
	I1205 07:18:37.316724   59006 system_pods.go:89] "kube-apiserver-flannel-550303" [be691a85-5e62-4aab-a823-68c1c2013c76] Running
	I1205 07:18:37.316730   59006 system_pods.go:89] "kube-controller-manager-flannel-550303" [c55715b5-5479-4d3f-81e1-44a40234cf9c] Running
	I1205 07:18:37.316736   59006 system_pods.go:89] "kube-proxy-cl4z8" [7f1ff15f-1853-44df-8885-d99528fd197f] Running
	I1205 07:18:37.316742   59006 system_pods.go:89] "kube-scheduler-flannel-550303" [1c19495d-a875-4217-a91d-90f4f0a9467f] Running
	I1205 07:18:37.316747   59006 system_pods.go:89] "storage-provisioner" [8f1cefad-6018-4b0a-bdf3-09d7c0a7a473] Running
	I1205 07:18:37.316762   59006 retry.go:31] will retry after 577.023296ms: missing components: kube-dns
	I1205 07:18:37.899713   59006 system_pods.go:86] 7 kube-system pods found
	I1205 07:18:37.899744   59006 system_pods.go:89] "coredns-66bc5c9577-zl5kn" [14b24589-7009-4a9b-ac67-61706cd7e467] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 07:18:37.899751   59006 system_pods.go:89] "etcd-flannel-550303" [7b6a9d53-ca92-467f-b068-5a1afaea8690] Running
	I1205 07:18:37.899758   59006 system_pods.go:89] "kube-apiserver-flannel-550303" [be691a85-5e62-4aab-a823-68c1c2013c76] Running
	I1205 07:18:37.899763   59006 system_pods.go:89] "kube-controller-manager-flannel-550303" [c55715b5-5479-4d3f-81e1-44a40234cf9c] Running
	I1205 07:18:37.899767   59006 system_pods.go:89] "kube-proxy-cl4z8" [7f1ff15f-1853-44df-8885-d99528fd197f] Running
	I1205 07:18:37.899771   59006 system_pods.go:89] "kube-scheduler-flannel-550303" [1c19495d-a875-4217-a91d-90f4f0a9467f] Running
	I1205 07:18:37.899775   59006 system_pods.go:89] "storage-provisioner" [8f1cefad-6018-4b0a-bdf3-09d7c0a7a473] Running
	I1205 07:18:37.899790   59006 retry.go:31] will retry after 848.184459ms: missing components: kube-dns
	I1205 07:18:38.753430   59006 system_pods.go:86] 7 kube-system pods found
	I1205 07:18:38.753462   59006 system_pods.go:89] "coredns-66bc5c9577-zl5kn" [14b24589-7009-4a9b-ac67-61706cd7e467] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 07:18:38.753468   59006 system_pods.go:89] "etcd-flannel-550303" [7b6a9d53-ca92-467f-b068-5a1afaea8690] Running
	I1205 07:18:38.753475   59006 system_pods.go:89] "kube-apiserver-flannel-550303" [be691a85-5e62-4aab-a823-68c1c2013c76] Running
	I1205 07:18:38.753480   59006 system_pods.go:89] "kube-controller-manager-flannel-550303" [c55715b5-5479-4d3f-81e1-44a40234cf9c] Running
	I1205 07:18:38.753485   59006 system_pods.go:89] "kube-proxy-cl4z8" [7f1ff15f-1853-44df-8885-d99528fd197f] Running
	I1205 07:18:38.753490   59006 system_pods.go:89] "kube-scheduler-flannel-550303" [1c19495d-a875-4217-a91d-90f4f0a9467f] Running
	I1205 07:18:38.753494   59006 system_pods.go:89] "storage-provisioner" [8f1cefad-6018-4b0a-bdf3-09d7c0a7a473] Running
	I1205 07:18:38.753511   59006 retry.go:31] will retry after 1.181248488s: missing components: kube-dns
	I1205 07:18:39.940566   59006 system_pods.go:86] 7 kube-system pods found
	I1205 07:18:39.940609   59006 system_pods.go:89] "coredns-66bc5c9577-zl5kn" [14b24589-7009-4a9b-ac67-61706cd7e467] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 07:18:39.940617   59006 system_pods.go:89] "etcd-flannel-550303" [7b6a9d53-ca92-467f-b068-5a1afaea8690] Running
	I1205 07:18:39.940638   59006 system_pods.go:89] "kube-apiserver-flannel-550303" [be691a85-5e62-4aab-a823-68c1c2013c76] Running
	I1205 07:18:39.940645   59006 system_pods.go:89] "kube-controller-manager-flannel-550303" [c55715b5-5479-4d3f-81e1-44a40234cf9c] Running
	I1205 07:18:39.940650   59006 system_pods.go:89] "kube-proxy-cl4z8" [7f1ff15f-1853-44df-8885-d99528fd197f] Running
	I1205 07:18:39.940655   59006 system_pods.go:89] "kube-scheduler-flannel-550303" [1c19495d-a875-4217-a91d-90f4f0a9467f] Running
	I1205 07:18:39.940660   59006 system_pods.go:89] "storage-provisioner" [8f1cefad-6018-4b0a-bdf3-09d7c0a7a473] Running
	I1205 07:18:39.940679   59006 retry.go:31] will retry after 1.181028074s: missing components: kube-dns
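
The 59006 process above polls the kube-system pods with a growing delay until coredns (the kube-dns component) leaves Pending. Purely as an illustration of that pattern, here is a minimal, self-contained Go sketch of such a backoff wait loop; the helper name, delays, and growth factor are invented for the example and are not taken from minikube's retry package:

package main

import (
	"fmt"
	"time"
)

// waitFor polls check until it reports no missing components or the timeout
// expires, growing the delay between attempts, roughly like the backoff seen
// in the log (415ms -> 577ms -> 848ms -> ...).
func waitFor(timeout time.Duration, check func() []string) error {
	deadline := time.Now().Add(timeout)
	delay := 400 * time.Millisecond
	for {
		missing := check()
		if len(missing) == 0 {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out, still missing: %v", missing)
		}
		fmt.Printf("will retry after %v: missing components: %v\n", delay, missing)
		time.Sleep(delay)
		delay = delay * 3 / 2 // grow the backoff after every miss
	}
}

func main() {
	start := time.Now()
	err := waitFor(30*time.Second, func() []string {
		// Stand-in for "list kube-system pods and check that kube-dns is Running".
		if time.Since(start) > 2*time.Second {
			return nil
		}
		return []string{"kube-dns"}
	})
	fmt.Println("wait finished, err =", err)
}

In the real log the delay keeps growing until the 07:18:52 entries further down, where the pod list finally reports coredns as Running.
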
	I1205 07:18:38.666451   60030 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.666256338s)
	I1205 07:18:38.666490   60030 crio.go:469] duration metric: took 1.666368766s to extract the tarball
	I1205 07:18:38.666504   60030 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1205 07:18:38.708157   60030 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 07:18:38.762310   60030 crio.go:514] all images are preloaded for cri-o runtime.
	I1205 07:18:38.762335   60030 cache_images.go:86] Images are preloaded, skipping loading
	I1205 07:18:38.762342   60030 kubeadm.go:935] updating node { 192.168.72.252 8443 v1.34.2 crio true true} ...
	I1205 07:18:38.762440   60030 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=bridge-550303 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.252
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:bridge-550303 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge}
	I1205 07:18:38.762517   60030 ssh_runner.go:195] Run: crio config
	I1205 07:18:38.822191   60030 cni.go:84] Creating CNI manager for "bridge"
	I1205 07:18:38.822226   60030 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1205 07:18:38.822256   60030 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.252 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:bridge-550303 NodeName:bridge-550303 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.252"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.252 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 07:18:38.822402   60030 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.252
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "bridge-550303"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.252"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.252"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1205 07:18:38.822462   60030 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1205 07:18:38.838692   60030 binaries.go:51] Found k8s binaries, skipping transfer
	I1205 07:18:38.838768   60030 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 07:18:38.851708   60030 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1205 07:18:38.876840   60030 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1205 07:18:38.905845   60030 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2216 bytes)
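
The kubeadm.yaml.new pushed above holds the multi-document YAML shown a few lines earlier (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration in one file). As a rough illustration of how such a file can be inspected offline, the following Go sketch lists the apiVersion and kind of each document; it assumes gopkg.in/yaml.v3 is available and reuses the on-node path from the log, which you would replace with a local copy:

package main

import (
	"errors"
	"fmt"
	"io"
	"log"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	// On-node path taken from the log; point this at a local copy instead.
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	// The file holds several YAML documents separated by "---"; decode each
	// one and print its apiVersion and kind.
	dec := yaml.NewDecoder(f)
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err != nil {
			if errors.Is(err, io.EOF) {
				break
			}
			log.Fatal(err)
		}
		fmt.Printf("%v %v\n", doc["apiVersion"], doc["kind"])
	}
}
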
	I1205 07:18:38.932359   60030 ssh_runner.go:195] Run: grep 192.168.72.252	control-plane.minikube.internal$ /etc/hosts
	I1205 07:18:38.936669   60030 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.252	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 07:18:38.953759   60030 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 07:18:39.114894   60030 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 07:18:39.149260   60030 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/bridge-550303 for IP: 192.168.72.252
	I1205 07:18:39.149282   60030 certs.go:195] generating shared ca certs ...
	I1205 07:18:39.149302   60030 certs.go:227] acquiring lock for ca certs: {Name:mk31e04487a5cf4ece02d9725a994239b98a3eba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:18:39.149475   60030 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-12744/.minikube/ca.key
	I1205 07:18:39.149524   60030 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-12744/.minikube/proxy-client-ca.key
	I1205 07:18:39.149535   60030 certs.go:257] generating profile certs ...
	I1205 07:18:39.149594   60030 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/bridge-550303/client.key
	I1205 07:18:39.149609   60030 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/bridge-550303/client.crt with IP's: []
	I1205 07:18:39.187525   60030 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/bridge-550303/client.crt ...
	I1205 07:18:39.187562   60030 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/bridge-550303/client.crt: {Name:mk09e0434e011d14189fb78fa70d32d50e8e3a82 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:18:39.187808   60030 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/bridge-550303/client.key ...
	I1205 07:18:39.187834   60030 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/bridge-550303/client.key: {Name:mkea1b19e3ec10f30bd40aa90ac8eda6c581a28a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:18:39.187962   60030 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/bridge-550303/apiserver.key.19c59aa8
	I1205 07:18:39.187979   60030 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/bridge-550303/apiserver.crt.19c59aa8 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.252]
	I1205 07:18:39.329563   60030 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/bridge-550303/apiserver.crt.19c59aa8 ...
	I1205 07:18:39.329599   60030 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/bridge-550303/apiserver.crt.19c59aa8: {Name:mk681433c8d9bce42d80d64bbcd95f711af9f6cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:18:39.329830   60030 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/bridge-550303/apiserver.key.19c59aa8 ...
	I1205 07:18:39.329853   60030 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/bridge-550303/apiserver.key.19c59aa8: {Name:mk4fe14f8faf16c75d872412822cc745f2f31962 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:18:39.329971   60030 certs.go:382] copying /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/bridge-550303/apiserver.crt.19c59aa8 -> /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/bridge-550303/apiserver.crt
	I1205 07:18:39.330069   60030 certs.go:386] copying /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/bridge-550303/apiserver.key.19c59aa8 -> /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/bridge-550303/apiserver.key
	I1205 07:18:39.330146   60030 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/bridge-550303/proxy-client.key
	I1205 07:18:39.330167   60030 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/bridge-550303/proxy-client.crt with IP's: []
	I1205 07:18:39.356336   60030 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/bridge-550303/proxy-client.crt ...
	I1205 07:18:39.356357   60030 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/bridge-550303/proxy-client.crt: {Name:mk182e6b3de867b90fcb2d9f368dabc30433ca47 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:18:39.356488   60030 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/bridge-550303/proxy-client.key ...
	I1205 07:18:39.356502   60030 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/bridge-550303/proxy-client.key: {Name:mk18d1f99ec13e1dde18278857990d6b058ab664 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:18:39.356770   60030 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-12744/.minikube/certs/16702.pem (1338 bytes)
	W1205 07:18:39.356814   60030 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-12744/.minikube/certs/16702_empty.pem, impossibly tiny 0 bytes
	I1205 07:18:39.356825   60030 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-12744/.minikube/certs/ca-key.pem (1679 bytes)
	I1205 07:18:39.356864   60030 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-12744/.minikube/certs/ca.pem (1078 bytes)
	I1205 07:18:39.356905   60030 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-12744/.minikube/certs/cert.pem (1123 bytes)
	I1205 07:18:39.356931   60030 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-12744/.minikube/certs/key.pem (1675 bytes)
	I1205 07:18:39.356985   60030 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-12744/.minikube/files/etc/ssl/certs/167022.pem (1708 bytes)
	I1205 07:18:39.357568   60030 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12744/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 07:18:39.393198   60030 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12744/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1205 07:18:39.423777   60030 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12744/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 07:18:39.456770   60030 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12744/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1205 07:18:39.488846   60030 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/bridge-550303/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1205 07:18:39.521755   60030 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/bridge-550303/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1205 07:18:39.556308   60030 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/bridge-550303/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 07:18:39.590617   60030 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/bridge-550303/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1205 07:18:39.623625   60030 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12744/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 07:18:39.657678   60030 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12744/.minikube/certs/16702.pem --> /usr/share/ca-certificates/16702.pem (1338 bytes)
	I1205 07:18:39.691357   60030 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12744/.minikube/files/etc/ssl/certs/167022.pem --> /usr/share/ca-certificates/167022.pem (1708 bytes)
	I1205 07:18:39.724985   60030 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 07:18:39.749243   60030 ssh_runner.go:195] Run: openssl version
	I1205 07:18:39.757112   60030 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/16702.pem
	I1205 07:18:39.769556   60030 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/16702.pem /etc/ssl/certs/16702.pem
	I1205 07:18:39.782297   60030 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16702.pem
	I1205 07:18:39.787875   60030 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 06:17 /usr/share/ca-certificates/16702.pem
	I1205 07:18:39.787941   60030 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16702.pem
	I1205 07:18:39.795651   60030 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1205 07:18:39.808037   60030 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/16702.pem /etc/ssl/certs/51391683.0
	I1205 07:18:39.819831   60030 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/167022.pem
	I1205 07:18:39.835359   60030 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/167022.pem /etc/ssl/certs/167022.pem
	I1205 07:18:39.849383   60030 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/167022.pem
	I1205 07:18:39.857386   60030 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 06:17 /usr/share/ca-certificates/167022.pem
	I1205 07:18:39.857449   60030 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/167022.pem
	I1205 07:18:39.869844   60030 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1205 07:18:39.888082   60030 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/167022.pem /etc/ssl/certs/3ec20f2e.0
	I1205 07:18:39.906233   60030 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:18:39.922328   60030 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1205 07:18:39.936505   60030 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:18:39.942645   60030 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 06:05 /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:18:39.942728   60030 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:18:39.951669   60030 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1205 07:18:39.966800   60030 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1205 07:18:39.980990   60030 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 07:18:39.986263   60030 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1205 07:18:39.986317   60030 kubeadm.go:401] StartCluster: {Name:bridge-550303 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:bridge-550303 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.72.252 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 07:18:39.986378   60030 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 07:18:39.986441   60030 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 07:18:40.030187   60030 cri.go:89] found id: ""
	I1205 07:18:40.030256   60030 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 07:18:40.045601   60030 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 07:18:40.058743   60030 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 07:18:40.072383   60030 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 07:18:40.072408   60030 kubeadm.go:158] found existing configuration files:
	
	I1205 07:18:40.072461   60030 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 07:18:40.088482   60030 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 07:18:40.088554   60030 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 07:18:40.104767   60030 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 07:18:40.116454   60030 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 07:18:40.116509   60030 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 07:18:40.132477   60030 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 07:18:40.145849   60030 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 07:18:40.145917   60030 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 07:18:40.159852   60030 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 07:18:40.171990   60030 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 07:18:40.172049   60030 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1205 07:18:40.187242   60030 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1205 07:18:40.354961   60030 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1205 07:18:41.131624   59006 system_pods.go:86] 7 kube-system pods found
	I1205 07:18:41.131696   59006 system_pods.go:89] "coredns-66bc5c9577-zl5kn" [14b24589-7009-4a9b-ac67-61706cd7e467] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 07:18:41.131708   59006 system_pods.go:89] "etcd-flannel-550303" [7b6a9d53-ca92-467f-b068-5a1afaea8690] Running
	I1205 07:18:41.131721   59006 system_pods.go:89] "kube-apiserver-flannel-550303" [be691a85-5e62-4aab-a823-68c1c2013c76] Running
	I1205 07:18:41.131727   59006 system_pods.go:89] "kube-controller-manager-flannel-550303" [c55715b5-5479-4d3f-81e1-44a40234cf9c] Running
	I1205 07:18:41.131738   59006 system_pods.go:89] "kube-proxy-cl4z8" [7f1ff15f-1853-44df-8885-d99528fd197f] Running
	I1205 07:18:41.131746   59006 system_pods.go:89] "kube-scheduler-flannel-550303" [1c19495d-a875-4217-a91d-90f4f0a9467f] Running
	I1205 07:18:41.131754   59006 system_pods.go:89] "storage-provisioner" [8f1cefad-6018-4b0a-bdf3-09d7c0a7a473] Running
	I1205 07:18:41.131774   59006 retry.go:31] will retry after 1.36959881s: missing components: kube-dns
	I1205 07:18:42.506700   59006 system_pods.go:86] 7 kube-system pods found
	I1205 07:18:42.506735   59006 system_pods.go:89] "coredns-66bc5c9577-zl5kn" [14b24589-7009-4a9b-ac67-61706cd7e467] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 07:18:42.506743   59006 system_pods.go:89] "etcd-flannel-550303" [7b6a9d53-ca92-467f-b068-5a1afaea8690] Running
	I1205 07:18:42.506752   59006 system_pods.go:89] "kube-apiserver-flannel-550303" [be691a85-5e62-4aab-a823-68c1c2013c76] Running
	I1205 07:18:42.506759   59006 system_pods.go:89] "kube-controller-manager-flannel-550303" [c55715b5-5479-4d3f-81e1-44a40234cf9c] Running
	I1205 07:18:42.506765   59006 system_pods.go:89] "kube-proxy-cl4z8" [7f1ff15f-1853-44df-8885-d99528fd197f] Running
	I1205 07:18:42.506780   59006 system_pods.go:89] "kube-scheduler-flannel-550303" [1c19495d-a875-4217-a91d-90f4f0a9467f] Running
	I1205 07:18:42.506789   59006 system_pods.go:89] "storage-provisioner" [8f1cefad-6018-4b0a-bdf3-09d7c0a7a473] Running
	I1205 07:18:42.506805   59006 retry.go:31] will retry after 1.744559578s: missing components: kube-dns
	I1205 07:18:44.257965   59006 system_pods.go:86] 7 kube-system pods found
	I1205 07:18:44.258003   59006 system_pods.go:89] "coredns-66bc5c9577-zl5kn" [14b24589-7009-4a9b-ac67-61706cd7e467] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 07:18:44.258012   59006 system_pods.go:89] "etcd-flannel-550303" [7b6a9d53-ca92-467f-b068-5a1afaea8690] Running
	I1205 07:18:44.258020   59006 system_pods.go:89] "kube-apiserver-flannel-550303" [be691a85-5e62-4aab-a823-68c1c2013c76] Running
	I1205 07:18:44.258035   59006 system_pods.go:89] "kube-controller-manager-flannel-550303" [c55715b5-5479-4d3f-81e1-44a40234cf9c] Running
	I1205 07:18:44.258040   59006 system_pods.go:89] "kube-proxy-cl4z8" [7f1ff15f-1853-44df-8885-d99528fd197f] Running
	I1205 07:18:44.258045   59006 system_pods.go:89] "kube-scheduler-flannel-550303" [1c19495d-a875-4217-a91d-90f4f0a9467f] Running
	I1205 07:18:44.258050   59006 system_pods.go:89] "storage-provisioner" [8f1cefad-6018-4b0a-bdf3-09d7c0a7a473] Running
	I1205 07:18:44.258068   59006 retry.go:31] will retry after 2.258217927s: missing components: kube-dns
	I1205 07:18:46.522539   59006 system_pods.go:86] 7 kube-system pods found
	I1205 07:18:46.522573   59006 system_pods.go:89] "coredns-66bc5c9577-zl5kn" [14b24589-7009-4a9b-ac67-61706cd7e467] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 07:18:46.522587   59006 system_pods.go:89] "etcd-flannel-550303" [7b6a9d53-ca92-467f-b068-5a1afaea8690] Running
	I1205 07:18:46.522595   59006 system_pods.go:89] "kube-apiserver-flannel-550303" [be691a85-5e62-4aab-a823-68c1c2013c76] Running
	I1205 07:18:46.522603   59006 system_pods.go:89] "kube-controller-manager-flannel-550303" [c55715b5-5479-4d3f-81e1-44a40234cf9c] Running
	I1205 07:18:46.522608   59006 system_pods.go:89] "kube-proxy-cl4z8" [7f1ff15f-1853-44df-8885-d99528fd197f] Running
	I1205 07:18:46.522613   59006 system_pods.go:89] "kube-scheduler-flannel-550303" [1c19495d-a875-4217-a91d-90f4f0a9467f] Running
	I1205 07:18:46.522619   59006 system_pods.go:89] "storage-provisioner" [8f1cefad-6018-4b0a-bdf3-09d7c0a7a473] Running
	I1205 07:18:46.522635   59006 retry.go:31] will retry after 2.429061767s: missing components: kube-dns
	I1205 07:18:48.958553   59006 system_pods.go:86] 7 kube-system pods found
	I1205 07:18:48.958589   59006 system_pods.go:89] "coredns-66bc5c9577-zl5kn" [14b24589-7009-4a9b-ac67-61706cd7e467] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 07:18:48.958596   59006 system_pods.go:89] "etcd-flannel-550303" [7b6a9d53-ca92-467f-b068-5a1afaea8690] Running
	I1205 07:18:48.958604   59006 system_pods.go:89] "kube-apiserver-flannel-550303" [be691a85-5e62-4aab-a823-68c1c2013c76] Running
	I1205 07:18:48.958609   59006 system_pods.go:89] "kube-controller-manager-flannel-550303" [c55715b5-5479-4d3f-81e1-44a40234cf9c] Running
	I1205 07:18:48.958614   59006 system_pods.go:89] "kube-proxy-cl4z8" [7f1ff15f-1853-44df-8885-d99528fd197f] Running
	I1205 07:18:48.958620   59006 system_pods.go:89] "kube-scheduler-flannel-550303" [1c19495d-a875-4217-a91d-90f4f0a9467f] Running
	I1205 07:18:48.958624   59006 system_pods.go:89] "storage-provisioner" [8f1cefad-6018-4b0a-bdf3-09d7c0a7a473] Running
	I1205 07:18:48.958641   59006 retry.go:31] will retry after 3.163445288s: missing components: kube-dns
	I1205 07:18:52.963659   60030 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1205 07:18:52.963767   60030 kubeadm.go:319] [preflight] Running pre-flight checks
	I1205 07:18:52.963874   60030 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 07:18:52.964013   60030 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 07:18:52.964141   60030 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1205 07:18:52.964233   60030 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 07:18:52.965838   60030 out.go:252]   - Generating certificates and keys ...
	I1205 07:18:52.965917   60030 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1205 07:18:52.965983   60030 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1205 07:18:52.966080   60030 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1205 07:18:52.966167   60030 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1205 07:18:52.966228   60030 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1205 07:18:52.966292   60030 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1205 07:18:52.966379   60030 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1205 07:18:52.966534   60030 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [bridge-550303 localhost] and IPs [192.168.72.252 127.0.0.1 ::1]
	I1205 07:18:52.966624   60030 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1205 07:18:52.966834   60030 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [bridge-550303 localhost] and IPs [192.168.72.252 127.0.0.1 ::1]
	I1205 07:18:52.966942   60030 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1205 07:18:52.967040   60030 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1205 07:18:52.967106   60030 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1205 07:18:52.967192   60030 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 07:18:52.967269   60030 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 07:18:52.967349   60030 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1205 07:18:52.967426   60030 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 07:18:52.967513   60030 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 07:18:52.967585   60030 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 07:18:52.967715   60030 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 07:18:52.967809   60030 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 07:18:52.969779   60030 out.go:252]   - Booting up control plane ...
	I1205 07:18:52.969890   60030 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 07:18:52.970005   60030 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 07:18:52.970112   60030 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 07:18:52.970255   60030 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 07:18:52.970383   60030 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1205 07:18:52.970545   60030 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1205 07:18:52.970646   60030 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 07:18:52.970696   60030 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1205 07:18:52.970859   60030 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1205 07:18:52.970987   60030 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1205 07:18:52.971081   60030 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001675799s
	I1205 07:18:52.971163   60030 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1205 07:18:52.971243   60030 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.72.252:8443/livez
	I1205 07:18:52.971322   60030 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1205 07:18:52.971385   60030 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1205 07:18:52.971448   60030 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.537189301s
	I1205 07:18:52.971502   60030 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 4.243905287s
	I1205 07:18:52.971560   60030 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.503007157s
	I1205 07:18:52.971643   60030 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1205 07:18:52.971780   60030 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1205 07:18:52.971864   60030 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1205 07:18:52.972064   60030 kubeadm.go:319] [mark-control-plane] Marking the node bridge-550303 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1205 07:18:52.972126   60030 kubeadm.go:319] [bootstrap-token] Using token: h1i703.4ukfcts0x9nyz8qq
	I1205 07:18:52.973404   60030 out.go:252]   - Configuring RBAC rules ...
	I1205 07:18:52.973496   60030 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1205 07:18:52.973568   60030 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1205 07:18:52.973695   60030 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1205 07:18:52.973818   60030 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1205 07:18:52.973943   60030 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1205 07:18:52.974023   60030 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1205 07:18:52.974130   60030 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1205 07:18:52.974172   60030 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1205 07:18:52.974247   60030 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1205 07:18:52.974262   60030 kubeadm.go:319] 
	I1205 07:18:52.974355   60030 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1205 07:18:52.974370   60030 kubeadm.go:319] 
	I1205 07:18:52.974466   60030 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1205 07:18:52.974478   60030 kubeadm.go:319] 
	I1205 07:18:52.974516   60030 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1205 07:18:52.974609   60030 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1205 07:18:52.974703   60030 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1205 07:18:52.974716   60030 kubeadm.go:319] 
	I1205 07:18:52.974792   60030 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1205 07:18:52.974803   60030 kubeadm.go:319] 
	I1205 07:18:52.974888   60030 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1205 07:18:52.974903   60030 kubeadm.go:319] 
	I1205 07:18:52.974996   60030 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1205 07:18:52.975091   60030 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1205 07:18:52.975156   60030 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1205 07:18:52.975162   60030 kubeadm.go:319] 
	I1205 07:18:52.975232   60030 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1205 07:18:52.975301   60030 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1205 07:18:52.975308   60030 kubeadm.go:319] 
	I1205 07:18:52.975373   60030 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token h1i703.4ukfcts0x9nyz8qq \
	I1205 07:18:52.975465   60030 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2d0ac5ac5e7ca057303e4430ec89e40d74de77786c64de55c276a16d7451ec23 \
	I1205 07:18:52.975486   60030 kubeadm.go:319] 	--control-plane 
	I1205 07:18:52.975490   60030 kubeadm.go:319] 
	I1205 07:18:52.975564   60030 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1205 07:18:52.975576   60030 kubeadm.go:319] 
	I1205 07:18:52.975648   60030 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token h1i703.4ukfcts0x9nyz8qq \
	I1205 07:18:52.975760   60030 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2d0ac5ac5e7ca057303e4430ec89e40d74de77786c64de55c276a16d7451ec23 
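
For reference, the --discovery-token-ca-cert-hash printed in the join commands above is the SHA-256 digest of the cluster CA certificate's DER-encoded SubjectPublicKeyInfo. The following Go sketch recomputes that value from a PEM-encoded CA certificate; the file path is illustrative (it is where this log copies the cluster CA onto the node) and would normally point at a local copy of ca.crt:

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/hex"
	"encoding/pem"
	"fmt"
	"log"
	"os"
)

func main() {
	// Illustrative path: the log above copies the cluster CA to this location on the node.
	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		log.Fatal("no PEM block found in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	// kubeadm's discovery hash is SHA-256 over the certificate's DER-encoded
	// SubjectPublicKeyInfo.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Printf("sha256:%s\n", hex.EncodeToString(sum[:]))
}

Run against the same ca.crt, the output should match the sha256 value embedded in the kubeadm join command above.
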
	I1205 07:18:52.975781   60030 cni.go:84] Creating CNI manager for "bridge"
	I1205 07:18:52.977186   60030 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1205 07:18:52.978334   60030 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1205 07:18:52.998576   60030 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1205 07:18:53.022589   60030 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1205 07:18:53.022675   60030 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 07:18:53.022720   60030 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes bridge-550303 minikube.k8s.io/updated_at=2025_12_05T07_18_53_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=d53406164b08000276c1d84507c3250851dada45 minikube.k8s.io/name=bridge-550303 minikube.k8s.io/primary=true
	I1205 07:18:53.081090   60030 ops.go:34] apiserver oom_adj: -16
	I1205 07:18:53.198781   60030 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 07:18:52.127251   59006 system_pods.go:86] 7 kube-system pods found
	I1205 07:18:52.127279   59006 system_pods.go:89] "coredns-66bc5c9577-zl5kn" [14b24589-7009-4a9b-ac67-61706cd7e467] Running
	I1205 07:18:52.127288   59006 system_pods.go:89] "etcd-flannel-550303" [7b6a9d53-ca92-467f-b068-5a1afaea8690] Running
	I1205 07:18:52.127292   59006 system_pods.go:89] "kube-apiserver-flannel-550303" [be691a85-5e62-4aab-a823-68c1c2013c76] Running
	I1205 07:18:52.127297   59006 system_pods.go:89] "kube-controller-manager-flannel-550303" [c55715b5-5479-4d3f-81e1-44a40234cf9c] Running
	I1205 07:18:52.127300   59006 system_pods.go:89] "kube-proxy-cl4z8" [7f1ff15f-1853-44df-8885-d99528fd197f] Running
	I1205 07:18:52.127303   59006 system_pods.go:89] "kube-scheduler-flannel-550303" [1c19495d-a875-4217-a91d-90f4f0a9467f] Running
	I1205 07:18:52.127306   59006 system_pods.go:89] "storage-provisioner" [8f1cefad-6018-4b0a-bdf3-09d7c0a7a473] Running
	I1205 07:18:52.127313   59006 system_pods.go:126] duration metric: took 16.319173967s to wait for k8s-apps to be running ...
	I1205 07:18:52.127319   59006 system_svc.go:44] waiting for kubelet service to be running ....
	I1205 07:18:52.127363   59006 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 07:18:52.153225   59006 system_svc.go:56] duration metric: took 25.894513ms WaitForService to wait for kubelet
	I1205 07:18:52.153261   59006 kubeadm.go:587] duration metric: took 23.016754499s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 07:18:52.153283   59006 node_conditions.go:102] verifying NodePressure condition ...
	I1205 07:18:52.157428   59006 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1205 07:18:52.157452   59006 node_conditions.go:123] node cpu capacity is 2
	I1205 07:18:52.157468   59006 node_conditions.go:105] duration metric: took 4.179864ms to run NodePressure ...
	I1205 07:18:52.157480   59006 start.go:242] waiting for startup goroutines ...
	I1205 07:18:52.157490   59006 start.go:247] waiting for cluster config update ...
	I1205 07:18:52.157502   59006 start.go:256] writing updated cluster config ...
	I1205 07:18:52.157832   59006 ssh_runner.go:195] Run: rm -f paused
	I1205 07:18:52.164367   59006 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1205 07:18:52.169317   59006 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-zl5kn" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:18:52.175122   59006 pod_ready.go:94] pod "coredns-66bc5c9577-zl5kn" is "Ready"
	I1205 07:18:52.175148   59006 pod_ready.go:86] duration metric: took 5.793991ms for pod "coredns-66bc5c9577-zl5kn" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:18:52.180703   59006 pod_ready.go:83] waiting for pod "etcd-flannel-550303" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:18:52.187888   59006 pod_ready.go:94] pod "etcd-flannel-550303" is "Ready"
	I1205 07:18:52.187919   59006 pod_ready.go:86] duration metric: took 7.185967ms for pod "etcd-flannel-550303" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:18:52.190519   59006 pod_ready.go:83] waiting for pod "kube-apiserver-flannel-550303" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:18:52.196932   59006 pod_ready.go:94] pod "kube-apiserver-flannel-550303" is "Ready"
	I1205 07:18:52.196964   59006 pod_ready.go:86] duration metric: took 6.414641ms for pod "kube-apiserver-flannel-550303" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:18:52.199331   59006 pod_ready.go:83] waiting for pod "kube-controller-manager-flannel-550303" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:18:52.569730   59006 pod_ready.go:94] pod "kube-controller-manager-flannel-550303" is "Ready"
	I1205 07:18:52.569758   59006 pod_ready.go:86] duration metric: took 370.400826ms for pod "kube-controller-manager-flannel-550303" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:18:52.770089   59006 pod_ready.go:83] waiting for pod "kube-proxy-cl4z8" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:18:53.170385   59006 pod_ready.go:94] pod "kube-proxy-cl4z8" is "Ready"
	I1205 07:18:53.170416   59006 pod_ready.go:86] duration metric: took 400.291303ms for pod "kube-proxy-cl4z8" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:18:53.369514   59006 pod_ready.go:83] waiting for pod "kube-scheduler-flannel-550303" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:18:53.769122   59006 pod_ready.go:94] pod "kube-scheduler-flannel-550303" is "Ready"
	I1205 07:18:53.769149   59006 pod_ready.go:86] duration metric: took 399.605335ms for pod "kube-scheduler-flannel-550303" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:18:53.769162   59006 pod_ready.go:40] duration metric: took 1.604764123s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1205 07:18:53.817482   59006 start.go:625] kubectl: 1.34.2, cluster: 1.34.2 (minor skew: 0)
	I1205 07:18:53.819258   59006 out.go:179] * Done! kubectl is now configured to use "flannel-550303" cluster and "default" namespace by default
	I1205 07:18:56.340210   50912 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.61.103:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1205 07:18:56.340348   50912 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1205 07:18:56.342830   50912 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1205 07:18:56.342881   50912 kubeadm.go:319] [preflight] Running pre-flight checks
	I1205 07:18:56.342966   50912 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 07:18:56.343060   50912 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 07:18:56.343157   50912 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1205 07:18:56.343240   50912 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 07:18:56.347506   50912 out.go:252]   - Generating certificates and keys ...
	I1205 07:18:56.347604   50912 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1205 07:18:56.347720   50912 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1205 07:18:56.347830   50912 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1205 07:18:56.347911   50912 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1205 07:18:56.347983   50912 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1205 07:18:56.348038   50912 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1205 07:18:56.348094   50912 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1205 07:18:56.348155   50912 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1205 07:18:56.348221   50912 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1205 07:18:56.348314   50912 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1205 07:18:56.348368   50912 kubeadm.go:319] [certs] Using the existing "sa" key
	I1205 07:18:56.348455   50912 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 07:18:56.348515   50912 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 07:18:56.348589   50912 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1205 07:18:56.348663   50912 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 07:18:56.348769   50912 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 07:18:56.348855   50912 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 07:18:56.348963   50912 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 07:18:56.349018   50912 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 07:18:53.699583   60030 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 07:18:54.198912   60030 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 07:18:54.699598   60030 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 07:18:55.198990   60030 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 07:18:55.699795   60030 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 07:18:56.199932   60030 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 07:18:56.699907   60030 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 07:18:57.199025   60030 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 07:18:57.698863   60030 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 07:18:57.851011   60030 kubeadm.go:1114] duration metric: took 4.828407443s to wait for elevateKubeSystemPrivileges
	I1205 07:18:57.851051   60030 kubeadm.go:403] duration metric: took 17.86473655s to StartCluster
	I1205 07:18:57.851075   60030 settings.go:142] acquiring lock: {Name:mk2f276bdecf61f8264687dd612372cc78cfacbb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:18:57.851167   60030 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21997-12744/kubeconfig
	I1205 07:18:57.852303   60030 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-12744/kubeconfig: {Name:mka919c4eb7b6e761ae422db15b3daf8c8fde4d7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:18:57.852564   60030 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1205 07:18:57.852574   60030 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.72.252 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 07:18:57.852652   60030 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1205 07:18:57.852752   60030 addons.go:70] Setting storage-provisioner=true in profile "bridge-550303"
	I1205 07:18:57.852769   60030 addons.go:239] Setting addon storage-provisioner=true in "bridge-550303"
	I1205 07:18:57.852770   60030 config.go:182] Loaded profile config "bridge-550303": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1205 07:18:57.852800   60030 host.go:66] Checking if "bridge-550303" exists ...
	I1205 07:18:57.852806   60030 addons.go:70] Setting default-storageclass=true in profile "bridge-550303"
	I1205 07:18:57.852829   60030 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "bridge-550303"
	I1205 07:18:57.854740   60030 out.go:179] * Verifying Kubernetes components...
	I1205 07:18:57.856236   60030 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 07:18:57.856298   60030 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 07:18:57.856505   60030 addons.go:239] Setting addon default-storageclass=true in "bridge-550303"
	I1205 07:18:57.856538   60030 host.go:66] Checking if "bridge-550303" exists ...
	I1205 07:18:57.857517   60030 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 07:18:57.857534   60030 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1205 07:18:57.858542   60030 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1205 07:18:57.858599   60030 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1205 07:18:57.861046   60030 main.go:143] libmachine: domain bridge-550303 has defined MAC address 52:54:00:af:f9:26 in network mk-bridge-550303
	I1205 07:18:57.861611   60030 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:af:f9:26", ip: ""} in network mk-bridge-550303: {Iface:virbr4 ExpiryTime:2025-12-05 08:18:30 +0000 UTC Type:0 Mac:52:54:00:af:f9:26 Iaid: IPaddr:192.168.72.252 Prefix:24 Hostname:bridge-550303 Clientid:01:52:54:00:af:f9:26}
	I1205 07:18:57.861655   60030 main.go:143] libmachine: domain bridge-550303 has defined IP address 192.168.72.252 and MAC address 52:54:00:af:f9:26 in network mk-bridge-550303
	I1205 07:18:57.861877   60030 sshutil.go:53] new ssh client: &{IP:192.168.72.252 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-12744/.minikube/machines/bridge-550303/id_rsa Username:docker}
	I1205 07:18:57.862064   60030 main.go:143] libmachine: domain bridge-550303 has defined MAC address 52:54:00:af:f9:26 in network mk-bridge-550303
	I1205 07:18:57.862526   60030 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:af:f9:26", ip: ""} in network mk-bridge-550303: {Iface:virbr4 ExpiryTime:2025-12-05 08:18:30 +0000 UTC Type:0 Mac:52:54:00:af:f9:26 Iaid: IPaddr:192.168.72.252 Prefix:24 Hostname:bridge-550303 Clientid:01:52:54:00:af:f9:26}
	I1205 07:18:57.862555   60030 main.go:143] libmachine: domain bridge-550303 has defined IP address 192.168.72.252 and MAC address 52:54:00:af:f9:26 in network mk-bridge-550303
	I1205 07:18:57.862767   60030 sshutil.go:53] new ssh client: &{IP:192.168.72.252 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-12744/.minikube/machines/bridge-550303/id_rsa Username:docker}
	I1205 07:18:58.077696   60030 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
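The pipeline above is how minikube injects the host.minikube.internal record: it rewrites the coredns ConfigMap in place, inserting a hosts block (pointing at 192.168.72.1) ahead of the forward plugin and a log directive ahead of errors. A quick way to confirm the injection landed, assuming the bridge-550303 context from this run, would be:

	kubectl --context bridge-550303 -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'

Per the sed expressions in the command, the printed Corefile should contain a hosts stanza mapping 192.168.72.1 to host.minikube.internal with a fallthrough directive.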
	I1205 07:18:58.184080   60030 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 07:18:58.510481   60030 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 07:18:58.516772   60030 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1205 07:18:56.350651   50912 out.go:252]   - Booting up control plane ...
	I1205 07:18:56.350739   50912 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 07:18:56.350808   50912 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 07:18:56.350865   50912 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 07:18:56.351023   50912 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 07:18:56.351151   50912 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1205 07:18:56.351277   50912 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1205 07:18:56.351397   50912 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 07:18:56.351443   50912 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1205 07:18:56.351590   50912 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1205 07:18:56.351768   50912 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1205 07:18:56.351853   50912 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001786275s
	I1205 07:18:56.352001   50912 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1205 07:18:56.352105   50912 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.61.103:8443/livez
	I1205 07:18:56.352177   50912 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1205 07:18:56.352286   50912 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1205 07:18:56.352382   50912 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 1.506740299s
	I1205 07:18:56.352458   50912 kubeadm.go:319] [control-plane-check] kube-apiserver is not healthy after 4m0.001076432s
	I1205 07:18:56.352566   50912 kubeadm.go:319] [control-plane-check] kube-controller-manager is not healthy after 4m0.000803206s
	I1205 07:18:56.352571   50912 kubeadm.go:319] 
	I1205 07:18:56.352713   50912 kubeadm.go:319] A control plane component may have crashed or exited when started by the container runtime.
	I1205 07:18:56.352793   50912 kubeadm.go:319] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1205 07:18:56.352878   50912 kubeadm.go:319] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1205 07:18:56.352949   50912 kubeadm.go:319] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1205 07:18:56.353028   50912 kubeadm.go:319] 	Once you have found the failing container, you can inspect its logs with:
	I1205 07:18:56.353144   50912 kubeadm.go:319] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1205 07:18:56.353175   50912 kubeadm.go:319] 
	W1205 07:18:56.353293   50912 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.2
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001786275s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.61.103:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is healthy after 1.506740299s
	[control-plane-check] kube-apiserver is not healthy after 4m0.001076432s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000803206s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.61.103:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	I1205 07:18:56.353376   50912 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1205 07:18:57.822880   50912 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.46948151s)
	I1205 07:18:57.822950   50912 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 07:18:57.843982   50912 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 07:18:57.863310   50912 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 07:18:57.863318   50912 kubeadm.go:158] found existing configuration files:
	
	I1205 07:18:57.863357   50912 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 07:18:57.880047   50912 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 07:18:57.880114   50912 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 07:18:57.897670   50912 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 07:18:57.913784   50912 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 07:18:57.913843   50912 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 07:18:57.934273   50912 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 07:18:57.947712   50912 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 07:18:57.947758   50912 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 07:18:57.960305   50912 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 07:18:57.972105   50912 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 07:18:57.972162   50912 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1205 07:18:57.984971   50912 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1205 07:18:58.146826   50912 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1205 07:18:58.996422   60030 start.go:977] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
	I1205 07:18:58.997295   60030 node_ready.go:35] waiting up to 15m0s for node "bridge-550303" to be "Ready" ...
	I1205 07:18:59.010230   60030 node_ready.go:49] node "bridge-550303" is "Ready"
	I1205 07:18:59.010260   60030 node_ready.go:38] duration metric: took 12.942344ms for node "bridge-550303" to be "Ready" ...
	I1205 07:18:59.010276   60030 api_server.go:52] waiting for apiserver process to appear ...
	I1205 07:18:59.010329   60030 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:18:59.311317   60030 api_server.go:72] duration metric: took 1.458711062s to wait for apiserver process to appear ...
	I1205 07:18:59.311350   60030 api_server.go:88] waiting for apiserver healthz status ...
	I1205 07:18:59.311372   60030 api_server.go:253] Checking apiserver healthz at https://192.168.72.252:8443/healthz ...
	I1205 07:18:59.328110   60030 api_server.go:279] https://192.168.72.252:8443/healthz returned 200:
	ok
	I1205 07:18:59.329840   60030 api_server.go:141] control plane version: v1.34.2
	I1205 07:18:59.329871   60030 api_server.go:131] duration metric: took 18.513492ms to wait for apiserver health ...
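The healthz wait above probes the apiserver directly at the node IP. Done by hand, the equivalent check would be the sketch below (assuming anonymous access to /healthz is still allowed by the default system:public-info-viewer RBAC binding, and using -k to skip TLS verification of the minikube-generated certificate):

	curl -k https://192.168.72.252:8443/healthz

which should print "ok", matching the 200 response logged above.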
	I1205 07:18:59.329883   60030 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 07:18:59.331571   60030 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1205 07:18:59.333985   60030 system_pods.go:59] 8 kube-system pods found
	I1205 07:18:59.334019   60030 system_pods.go:61] "coredns-66bc5c9577-d8fgk" [42ee3372-2cb2-4846-bc7f-4cf74a9765e6] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 07:18:59.334027   60030 system_pods.go:61] "coredns-66bc5c9577-zwmz4" [cabb97d4-fe82-479e-9a0c-1e546e4def10] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 07:18:59.334036   60030 system_pods.go:61] "etcd-bridge-550303" [7491624b-bb4b-4215-9561-cccb1e37e4ec] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1205 07:18:59.334040   60030 system_pods.go:61] "kube-apiserver-bridge-550303" [1f67a264-6811-4047-8938-a957badf45f0] Running
	I1205 07:18:59.334049   60030 system_pods.go:61] "kube-controller-manager-bridge-550303" [0f3fe843-4713-48b3-a1e4-57fb182b2c40] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1205 07:18:59.334056   60030 system_pods.go:61] "kube-proxy-2sr72" [5ba86e77-812d-447d-bb1b-d6630542ca07] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1205 07:18:59.334060   60030 system_pods.go:61] "kube-scheduler-bridge-550303" [974cb570-56ef-42a4-9f57-92bccc42445c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1205 07:18:59.334068   60030 system_pods.go:61] "storage-provisioner" [944963ea-3a7f-4ec6-af9f-aa47a35b1e0f] Pending
	I1205 07:18:59.334078   60030 system_pods.go:74] duration metric: took 4.188503ms to wait for pod list to return data ...
	I1205 07:18:59.334085   60030 default_sa.go:34] waiting for default service account to be created ...
	I1205 07:18:59.334480   60030 addons.go:530] duration metric: took 1.481822866s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1205 07:18:59.342342   60030 default_sa.go:45] found service account: "default"
	I1205 07:18:59.342371   60030 default_sa.go:55] duration metric: took 8.280572ms for default service account to be created ...
	I1205 07:18:59.342383   60030 system_pods.go:116] waiting for k8s-apps to be running ...
	I1205 07:18:59.346246   60030 system_pods.go:86] 8 kube-system pods found
	I1205 07:18:59.346283   60030 system_pods.go:89] "coredns-66bc5c9577-d8fgk" [42ee3372-2cb2-4846-bc7f-4cf74a9765e6] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 07:18:59.346295   60030 system_pods.go:89] "coredns-66bc5c9577-zwmz4" [cabb97d4-fe82-479e-9a0c-1e546e4def10] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 07:18:59.346304   60030 system_pods.go:89] "etcd-bridge-550303" [7491624b-bb4b-4215-9561-cccb1e37e4ec] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1205 07:18:59.346310   60030 system_pods.go:89] "kube-apiserver-bridge-550303" [1f67a264-6811-4047-8938-a957badf45f0] Running
	I1205 07:18:59.346319   60030 system_pods.go:89] "kube-controller-manager-bridge-550303" [0f3fe843-4713-48b3-a1e4-57fb182b2c40] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1205 07:18:59.346327   60030 system_pods.go:89] "kube-proxy-2sr72" [5ba86e77-812d-447d-bb1b-d6630542ca07] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1205 07:18:59.346335   60030 system_pods.go:89] "kube-scheduler-bridge-550303" [974cb570-56ef-42a4-9f57-92bccc42445c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1205 07:18:59.346344   60030 system_pods.go:89] "storage-provisioner" [944963ea-3a7f-4ec6-af9f-aa47a35b1e0f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1205 07:18:59.346396   60030 retry.go:31] will retry after 239.270793ms: missing components: kube-dns, kube-proxy
	I1205 07:18:59.507294   60030 kapi.go:214] "coredns" deployment in "kube-system" namespace and "bridge-550303" context rescaled to 1 replicas
	I1205 07:18:59.591785   60030 system_pods.go:86] 8 kube-system pods found
	I1205 07:18:59.591824   60030 system_pods.go:89] "coredns-66bc5c9577-d8fgk" [42ee3372-2cb2-4846-bc7f-4cf74a9765e6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 07:18:59.591836   60030 system_pods.go:89] "coredns-66bc5c9577-zwmz4" [cabb97d4-fe82-479e-9a0c-1e546e4def10] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 07:18:59.591845   60030 system_pods.go:89] "etcd-bridge-550303" [7491624b-bb4b-4215-9561-cccb1e37e4ec] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1205 07:18:59.591852   60030 system_pods.go:89] "kube-apiserver-bridge-550303" [1f67a264-6811-4047-8938-a957badf45f0] Running
	I1205 07:18:59.591861   60030 system_pods.go:89] "kube-controller-manager-bridge-550303" [0f3fe843-4713-48b3-a1e4-57fb182b2c40] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1205 07:18:59.591867   60030 system_pods.go:89] "kube-proxy-2sr72" [5ba86e77-812d-447d-bb1b-d6630542ca07] Running
	I1205 07:18:59.591877   60030 system_pods.go:89] "kube-scheduler-bridge-550303" [974cb570-56ef-42a4-9f57-92bccc42445c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1205 07:18:59.591888   60030 system_pods.go:89] "storage-provisioner" [944963ea-3a7f-4ec6-af9f-aa47a35b1e0f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1205 07:18:59.591898   60030 system_pods.go:126] duration metric: took 249.508426ms to wait for k8s-apps to be running ...
	I1205 07:18:59.591917   60030 system_svc.go:44] waiting for kubelet service to be running ....
	I1205 07:18:59.591968   60030 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 07:18:59.618989   60030 system_svc.go:56] duration metric: took 27.062912ms WaitForService to wait for kubelet
	I1205 07:18:59.619016   60030 kubeadm.go:587] duration metric: took 1.766417125s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 07:18:59.619032   60030 node_conditions.go:102] verifying NodePressure condition ...
	I1205 07:18:59.627057   60030 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1205 07:18:59.627089   60030 node_conditions.go:123] node cpu capacity is 2
	I1205 07:18:59.627102   60030 node_conditions.go:105] duration metric: took 8.066331ms to run NodePressure ...
	I1205 07:18:59.627114   60030 start.go:242] waiting for startup goroutines ...
	I1205 07:18:59.627121   60030 start.go:247] waiting for cluster config update ...
	I1205 07:18:59.627130   60030 start.go:256] writing updated cluster config ...
	I1205 07:18:59.627357   60030 ssh_runner.go:195] Run: rm -f paused
	I1205 07:18:59.645753   60030 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1205 07:18:59.655367   60030 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-d8fgk" in "kube-system" namespace to be "Ready" or be gone ...
	W1205 07:19:01.662737   60030 pod_ready.go:104] pod "coredns-66bc5c9577-d8fgk" is not "Ready", error: <nil>
	W1205 07:19:03.663123   60030 pod_ready.go:104] pod "coredns-66bc5c9577-d8fgk" is not "Ready", error: <nil>
	W1205 07:19:06.162170   60030 pod_ready.go:104] pod "coredns-66bc5c9577-d8fgk" is not "Ready", error: <nil>
	W1205 07:19:08.662595   60030 pod_ready.go:104] pod "coredns-66bc5c9577-d8fgk" is not "Ready", error: <nil>
	I1205 07:19:09.658837   60030 pod_ready.go:99] pod "coredns-66bc5c9577-d8fgk" in "kube-system" namespace is gone: getting pod "coredns-66bc5c9577-d8fgk" in "kube-system" namespace (will retry): pods "coredns-66bc5c9577-d8fgk" not found
	I1205 07:19:09.658865   60030 pod_ready.go:86] duration metric: took 10.003468058s for pod "coredns-66bc5c9577-d8fgk" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:19:09.658885   60030 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-zwmz4" in "kube-system" namespace to be "Ready" or be gone ...
	W1205 07:19:11.664849   60030 pod_ready.go:104] pod "coredns-66bc5c9577-zwmz4" is not "Ready", error: <nil>
	W1205 07:19:13.665165   60030 pod_ready.go:104] pod "coredns-66bc5c9577-zwmz4" is not "Ready", error: <nil>
	W1205 07:19:16.166122   60030 pod_ready.go:104] pod "coredns-66bc5c9577-zwmz4" is not "Ready", error: <nil>
	W1205 07:19:18.665543   60030 pod_ready.go:104] pod "coredns-66bc5c9577-zwmz4" is not "Ready", error: <nil>
	W1205 07:19:21.166179   60030 pod_ready.go:104] pod "coredns-66bc5c9577-zwmz4" is not "Ready", error: <nil>
	W1205 07:19:23.665636   60030 pod_ready.go:104] pod "coredns-66bc5c9577-zwmz4" is not "Ready", error: <nil>
	W1205 07:19:26.164656   60030 pod_ready.go:104] pod "coredns-66bc5c9577-zwmz4" is not "Ready", error: <nil>
	W1205 07:19:28.665655   60030 pod_ready.go:104] pod "coredns-66bc5c9577-zwmz4" is not "Ready", error: <nil>
	I1205 07:19:30.664346   60030 pod_ready.go:94] pod "coredns-66bc5c9577-zwmz4" is "Ready"
	I1205 07:19:30.664374   60030 pod_ready.go:86] duration metric: took 21.005483653s for pod "coredns-66bc5c9577-zwmz4" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:19:30.667233   60030 pod_ready.go:83] waiting for pod "etcd-bridge-550303" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:19:30.672543   60030 pod_ready.go:94] pod "etcd-bridge-550303" is "Ready"
	I1205 07:19:30.672563   60030 pod_ready.go:86] duration metric: took 5.304344ms for pod "etcd-bridge-550303" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:19:30.674715   60030 pod_ready.go:83] waiting for pod "kube-apiserver-bridge-550303" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:19:30.679452   60030 pod_ready.go:94] pod "kube-apiserver-bridge-550303" is "Ready"
	I1205 07:19:30.679477   60030 pod_ready.go:86] duration metric: took 4.737082ms for pod "kube-apiserver-bridge-550303" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:19:30.681906   60030 pod_ready.go:83] waiting for pod "kube-controller-manager-bridge-550303" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:19:30.862074   60030 pod_ready.go:94] pod "kube-controller-manager-bridge-550303" is "Ready"
	I1205 07:19:30.862109   60030 pod_ready.go:86] duration metric: took 180.180514ms for pod "kube-controller-manager-bridge-550303" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:19:31.062291   60030 pod_ready.go:83] waiting for pod "kube-proxy-2sr72" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:19:31.463524   60030 pod_ready.go:94] pod "kube-proxy-2sr72" is "Ready"
	I1205 07:19:31.463549   60030 pod_ready.go:86] duration metric: took 401.228722ms for pod "kube-proxy-2sr72" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:19:31.663907   60030 pod_ready.go:83] waiting for pod "kube-scheduler-bridge-550303" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:19:32.065169   60030 pod_ready.go:94] pod "kube-scheduler-bridge-550303" is "Ready"
	I1205 07:19:32.065203   60030 pod_ready.go:86] duration metric: took 401.274465ms for pod "kube-scheduler-bridge-550303" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:19:32.065215   60030 pod_ready.go:40] duration metric: took 32.419422687s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1205 07:19:32.108501   60030 start.go:625] kubectl: 1.34.2, cluster: 1.34.2 (minor skew: 0)
	I1205 07:19:32.110195   60030 out.go:179] * Done! kubectl is now configured to use "bridge-550303" cluster and "default" namespace by default
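The pod_ready waits above (07:18:59 through 07:19:32) are per-pod readiness checks against the label selectors listed at 07:18:59.645753. Reproduced by hand for one of those label sets, using the bridge-550303 context from this run, a roughly equivalent check would be:

	kubectl --context bridge-550303 -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=240s

The 240s timeout mirrors the "extra waiting up to 4m0s" budget logged at the start of that loop.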
	I1205 07:23:00.490181   50912 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.61.103:8443/livez: Get "https://192.168.61.103:8443/livez?timeout=10s": dial tcp 192.168.61.103:8443: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1205 07:23:00.490303   50912 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1205 07:23:00.492904   50912 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1205 07:23:00.492957   50912 kubeadm.go:319] [preflight] Running pre-flight checks
	I1205 07:23:00.493043   50912 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 07:23:00.493132   50912 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 07:23:00.493209   50912 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1205 07:23:00.493262   50912 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 07:23:00.495179   50912 out.go:252]   - Generating certificates and keys ...
	I1205 07:23:00.495241   50912 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1205 07:23:00.495288   50912 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1205 07:23:00.495365   50912 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1205 07:23:00.495414   50912 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1205 07:23:00.495466   50912 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1205 07:23:00.495506   50912 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1205 07:23:00.495552   50912 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1205 07:23:00.495597   50912 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1205 07:23:00.495661   50912 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1205 07:23:00.495734   50912 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1205 07:23:00.495772   50912 kubeadm.go:319] [certs] Using the existing "sa" key
	I1205 07:23:00.495813   50912 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 07:23:00.495858   50912 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 07:23:00.495905   50912 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1205 07:23:00.495952   50912 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 07:23:00.496002   50912 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 07:23:00.496073   50912 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 07:23:00.496149   50912 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 07:23:00.496199   50912 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 07:23:00.497712   50912 out.go:252]   - Booting up control plane ...
	I1205 07:23:00.497772   50912 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 07:23:00.497830   50912 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 07:23:00.497881   50912 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 07:23:00.497958   50912 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 07:23:00.498032   50912 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1205 07:23:00.498139   50912 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1205 07:23:00.498214   50912 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 07:23:00.498246   50912 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1205 07:23:00.498349   50912 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1205 07:23:00.498438   50912 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1205 07:23:00.498485   50912 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.002213396s
	I1205 07:23:00.498556   50912 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1205 07:23:00.498631   50912 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.61.103:8443/livez
	I1205 07:23:00.498737   50912 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1205 07:23:00.498810   50912 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1205 07:23:00.498869   50912 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 1.506451689s
	I1205 07:23:00.498927   50912 kubeadm.go:319] [control-plane-check] kube-apiserver is not healthy after 4m0.000800234s
	I1205 07:23:00.498995   50912 kubeadm.go:319] [control-plane-check] kube-controller-manager is not healthy after 4m0.00080655s
	I1205 07:23:00.498998   50912 kubeadm.go:319] 
	I1205 07:23:00.499074   50912 kubeadm.go:319] A control plane component may have crashed or exited when started by the container runtime.
	I1205 07:23:00.499154   50912 kubeadm.go:319] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1205 07:23:00.499231   50912 kubeadm.go:319] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1205 07:23:00.499307   50912 kubeadm.go:319] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1205 07:23:00.499367   50912 kubeadm.go:319] 	Once you have found the failing container, you can inspect its logs with:
	I1205 07:23:00.499431   50912 kubeadm.go:319] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1205 07:23:00.499452   50912 kubeadm.go:319] 
	I1205 07:23:00.499486   50912 kubeadm.go:403] duration metric: took 12m14.922160212s to StartCluster
	I1205 07:23:00.499518   50912 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:23:00.499564   50912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:23:00.536518   50912 cri.go:89] found id: ""
	I1205 07:23:00.536530   50912 logs.go:282] 0 containers: []
	W1205 07:23:00.536536   50912 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:23:00.536541   50912 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 07:23:00.536588   50912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:23:00.567805   50912 cri.go:89] found id: "bd3418faf118347fa20d92acb095ce12187dd245fe8da6f20c69ae3c21eb7890"
	I1205 07:23:00.567822   50912 cri.go:89] found id: ""
	I1205 07:23:00.567829   50912 logs.go:282] 1 containers: [bd3418faf118347fa20d92acb095ce12187dd245fe8da6f20c69ae3c21eb7890]
	I1205 07:23:00.567893   50912 ssh_runner.go:195] Run: which crictl
	I1205 07:23:00.572642   50912 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 07:23:00.572739   50912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:23:00.603440   50912 cri.go:89] found id: ""
	I1205 07:23:00.603455   50912 logs.go:282] 0 containers: []
	W1205 07:23:00.603464   50912 logs.go:284] No container was found matching "coredns"
	I1205 07:23:00.603469   50912 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:23:00.603529   50912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:23:00.635012   50912 cri.go:89] found id: "ff3607331547cf3ac6251b301ac6f5e6b5e93a6b7ad5d6d707c3bcd34d7f2c08"
	I1205 07:23:00.635027   50912 cri.go:89] found id: ""
	I1205 07:23:00.635045   50912 logs.go:282] 1 containers: [ff3607331547cf3ac6251b301ac6f5e6b5e93a6b7ad5d6d707c3bcd34d7f2c08]
	I1205 07:23:00.635101   50912 ssh_runner.go:195] Run: which crictl
	I1205 07:23:00.639381   50912 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:23:00.639447   50912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:23:00.672798   50912 cri.go:89] found id: ""
	I1205 07:23:00.672810   50912 logs.go:282] 0 containers: []
	W1205 07:23:00.672816   50912 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:23:00.672820   50912 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:23:00.672876   50912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:23:00.704543   50912 cri.go:89] found id: ""
	I1205 07:23:00.704560   50912 logs.go:282] 0 containers: []
	W1205 07:23:00.704568   50912 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:23:00.704573   50912 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 07:23:00.704634   50912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:23:00.736543   50912 cri.go:89] found id: ""
	I1205 07:23:00.736555   50912 logs.go:282] 0 containers: []
	W1205 07:23:00.736561   50912 logs.go:284] No container was found matching "kindnet"
	I1205 07:23:00.736566   50912 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1205 07:23:00.736611   50912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1205 07:23:00.770576   50912 cri.go:89] found id: ""
	I1205 07:23:00.770589   50912 logs.go:282] 0 containers: []
	W1205 07:23:00.770594   50912 logs.go:284] No container was found matching "storage-provisioner"
	I1205 07:23:00.770601   50912 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:23:00.770610   50912 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:23:00.842785   50912 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:23:00.842798   50912 logs.go:123] Gathering logs for etcd [bd3418faf118347fa20d92acb095ce12187dd245fe8da6f20c69ae3c21eb7890] ...
	I1205 07:23:00.842810   50912 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bd3418faf118347fa20d92acb095ce12187dd245fe8da6f20c69ae3c21eb7890"
	I1205 07:23:00.880269   50912 logs.go:123] Gathering logs for kube-scheduler [ff3607331547cf3ac6251b301ac6f5e6b5e93a6b7ad5d6d707c3bcd34d7f2c08] ...
	I1205 07:23:00.880284   50912 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ff3607331547cf3ac6251b301ac6f5e6b5e93a6b7ad5d6d707c3bcd34d7f2c08"
	I1205 07:23:00.940628   50912 logs.go:123] Gathering logs for CRI-O ...
	I1205 07:23:00.940644   50912 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 07:23:01.158648   50912 logs.go:123] Gathering logs for container status ...
	I1205 07:23:01.158666   50912 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:23:01.198117   50912 logs.go:123] Gathering logs for kubelet ...
	I1205 07:23:01.198132   50912 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:23:01.321953   50912 logs.go:123] Gathering logs for dmesg ...
	I1205 07:23:01.321970   50912 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1205 07:23:01.337811   50912 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.2
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.002213396s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.61.103:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is healthy after 1.506451689s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000800234s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.00080655s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.61.103:8443/livez: Get "https://192.168.61.103:8443/livez?timeout=10s": dial tcp 192.168.61.103:8443: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1205 07:23:01.337882   50912 out.go:285] * 
	W1205 07:23:01.337978   50912 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.2
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.002213396s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.61.103:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is healthy after 1.506451689s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000800234s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.00080655s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.61.103:8443/livez: Get "https://192.168.61.103:8443/livez?timeout=10s": dial tcp 192.168.61.103:8443: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1205 07:23:01.337995   50912 out.go:285] * 
	W1205 07:23:01.339757   50912 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 07:23:01.343083   50912 out.go:203] 
	W1205 07:23:01.344501   50912 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.2
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.002213396s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.61.103:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is healthy after 1.506451689s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000800234s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.00080655s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.61.103:8443/livez: Get "https://192.168.61.103:8443/livez?timeout=10s": dial tcp 192.168.61.103:8443: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1205 07:23:01.344520   50912 out.go:285] * 
	I1205 07:23:01.346565   50912 out.go:203] 
	
	
	==> CRI-O <==
	Dec 05 07:23:01 cert-expiration-809455 crio[3341]: time="2025-12-05 07:23:01.899921185Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1764919381899897362,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124341,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6e6fe441-5d00-497a-9b7c-c1ae61348fcd name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 07:23:01 cert-expiration-809455 crio[3341]: time="2025-12-05 07:23:01.901024721Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=eb87d244-3c6d-4266-9a45-9fd9677c2481 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 07:23:01 cert-expiration-809455 crio[3341]: time="2025-12-05 07:23:01.901110771Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=eb87d244-3c6d-4266-9a45-9fd9677c2481 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 07:23:01 cert-expiration-809455 crio[3341]: time="2025-12-05 07:23:01.901242274Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ff3607331547cf3ac6251b301ac6f5e6b5e93a6b7ad5d6d707c3bcd34d7f2c08,PodSandboxId:e40e54a773a62ee79ac2a134a29b6a3bbc494d9dcd7d4d600f1992cb922c515f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:4,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1764919140794873811,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-cert-expiration-809455,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f8874af50bdacc2698a8ef5ecb4c5d3b,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"prob
e-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd3418faf118347fa20d92acb095ce12187dd245fe8da6f20c69ae3c21eb7890,PodSandboxId:b9a19db1a5df0a4739a60a8e0b5dde1f37746318afc30ecdcd7fc01adf3a6c84,Metadata:&ContainerMetadata{Name:etcd,Attempt:4,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1764919140766120967,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-cert-expiration-809455,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ffabcd5819a3f52edfe2821a43fe03d3,},Annotations:map[string]string{io
.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=eb87d244-3c6d-4266-9a45-9fd9677c2481 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 07:23:01 cert-expiration-809455 crio[3341]: time="2025-12-05 07:23:01.929909245Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=589b1060-9ab2-4b02-81a7-1fecddc273e3 name=/runtime.v1.RuntimeService/Version
	Dec 05 07:23:01 cert-expiration-809455 crio[3341]: time="2025-12-05 07:23:01.929997588Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=589b1060-9ab2-4b02-81a7-1fecddc273e3 name=/runtime.v1.RuntimeService/Version
	Dec 05 07:23:01 cert-expiration-809455 crio[3341]: time="2025-12-05 07:23:01.931107330Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ff5bdf0a-16a2-416d-b98c-b49d1c40e61a name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 07:23:01 cert-expiration-809455 crio[3341]: time="2025-12-05 07:23:01.931522011Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1764919381931499157,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124341,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ff5bdf0a-16a2-416d-b98c-b49d1c40e61a name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 07:23:01 cert-expiration-809455 crio[3341]: time="2025-12-05 07:23:01.932567608Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=df2a8a70-5f1e-49bc-b8d6-3caee0e1316a name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 07:23:01 cert-expiration-809455 crio[3341]: time="2025-12-05 07:23:01.932702905Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=df2a8a70-5f1e-49bc-b8d6-3caee0e1316a name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 07:23:01 cert-expiration-809455 crio[3341]: time="2025-12-05 07:23:01.932999506Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ff3607331547cf3ac6251b301ac6f5e6b5e93a6b7ad5d6d707c3bcd34d7f2c08,PodSandboxId:e40e54a773a62ee79ac2a134a29b6a3bbc494d9dcd7d4d600f1992cb922c515f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:4,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1764919140794873811,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-cert-expiration-809455,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f8874af50bdacc2698a8ef5ecb4c5d3b,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"prob
e-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd3418faf118347fa20d92acb095ce12187dd245fe8da6f20c69ae3c21eb7890,PodSandboxId:b9a19db1a5df0a4739a60a8e0b5dde1f37746318afc30ecdcd7fc01adf3a6c84,Metadata:&ContainerMetadata{Name:etcd,Attempt:4,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1764919140766120967,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-cert-expiration-809455,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ffabcd5819a3f52edfe2821a43fe03d3,},Annotations:map[string]string{io
.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=df2a8a70-5f1e-49bc-b8d6-3caee0e1316a name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 07:23:01 cert-expiration-809455 crio[3341]: time="2025-12-05 07:23:01.962820293Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3368ba81-92cc-426b-9e75-49fe99552889 name=/runtime.v1.RuntimeService/Version
	Dec 05 07:23:01 cert-expiration-809455 crio[3341]: time="2025-12-05 07:23:01.962911075Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3368ba81-92cc-426b-9e75-49fe99552889 name=/runtime.v1.RuntimeService/Version
	Dec 05 07:23:01 cert-expiration-809455 crio[3341]: time="2025-12-05 07:23:01.964246825Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e399da74-ed9a-483d-88a4-314e3a819548 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 07:23:01 cert-expiration-809455 crio[3341]: time="2025-12-05 07:23:01.964668721Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1764919381964643523,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124341,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e399da74-ed9a-483d-88a4-314e3a819548 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 07:23:01 cert-expiration-809455 crio[3341]: time="2025-12-05 07:23:01.965539900Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e951f1b4-273e-4c27-9d92-d50fa6ef23cf name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 07:23:01 cert-expiration-809455 crio[3341]: time="2025-12-05 07:23:01.965606821Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e951f1b4-273e-4c27-9d92-d50fa6ef23cf name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 07:23:01 cert-expiration-809455 crio[3341]: time="2025-12-05 07:23:01.965677722Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ff3607331547cf3ac6251b301ac6f5e6b5e93a6b7ad5d6d707c3bcd34d7f2c08,PodSandboxId:e40e54a773a62ee79ac2a134a29b6a3bbc494d9dcd7d4d600f1992cb922c515f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:4,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1764919140794873811,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-cert-expiration-809455,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f8874af50bdacc2698a8ef5ecb4c5d3b,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"prob
e-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd3418faf118347fa20d92acb095ce12187dd245fe8da6f20c69ae3c21eb7890,PodSandboxId:b9a19db1a5df0a4739a60a8e0b5dde1f37746318afc30ecdcd7fc01adf3a6c84,Metadata:&ContainerMetadata{Name:etcd,Attempt:4,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1764919140766120967,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-cert-expiration-809455,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ffabcd5819a3f52edfe2821a43fe03d3,},Annotations:map[string]string{io
.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e951f1b4-273e-4c27-9d92-d50fa6ef23cf name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 07:23:01 cert-expiration-809455 crio[3341]: time="2025-12-05 07:23:01.995919039Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=eb86baf8-dd0f-47bd-ace4-b262ed723486 name=/runtime.v1.RuntimeService/Version
	Dec 05 07:23:01 cert-expiration-809455 crio[3341]: time="2025-12-05 07:23:01.995993833Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=eb86baf8-dd0f-47bd-ace4-b262ed723486 name=/runtime.v1.RuntimeService/Version
	Dec 05 07:23:01 cert-expiration-809455 crio[3341]: time="2025-12-05 07:23:01.996981082Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a04756a5-80d3-40d4-ad6a-33a84c0e45d3 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 07:23:01 cert-expiration-809455 crio[3341]: time="2025-12-05 07:23:01.997433261Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1764919381997408918,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124341,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a04756a5-80d3-40d4-ad6a-33a84c0e45d3 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 07:23:01 cert-expiration-809455 crio[3341]: time="2025-12-05 07:23:01.998500479Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=05cbc087-cd69-4168-b61f-ba4758ac161b name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 07:23:01 cert-expiration-809455 crio[3341]: time="2025-12-05 07:23:01.998552357Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=05cbc087-cd69-4168-b61f-ba4758ac161b name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 07:23:01 cert-expiration-809455 crio[3341]: time="2025-12-05 07:23:01.998637041Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ff3607331547cf3ac6251b301ac6f5e6b5e93a6b7ad5d6d707c3bcd34d7f2c08,PodSandboxId:e40e54a773a62ee79ac2a134a29b6a3bbc494d9dcd7d4d600f1992cb922c515f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:4,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1764919140794873811,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-cert-expiration-809455,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f8874af50bdacc2698a8ef5ecb4c5d3b,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"prob
e-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd3418faf118347fa20d92acb095ce12187dd245fe8da6f20c69ae3c21eb7890,PodSandboxId:b9a19db1a5df0a4739a60a8e0b5dde1f37746318afc30ecdcd7fc01adf3a6c84,Metadata:&ContainerMetadata{Name:etcd,Attempt:4,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1764919140766120967,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-cert-expiration-809455,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ffabcd5819a3f52edfe2821a43fe03d3,},Annotations:map[string]string{io
.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=05cbc087-cd69-4168-b61f-ba4758ac161b name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                ATTEMPT             POD ID              POD                                     NAMESPACE
	ff3607331547c       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952   4 minutes ago       Running             kube-scheduler      4                   e40e54a773a62       kube-scheduler-cert-expiration-809455   kube-system
	bd3418faf1183       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   4 minutes ago       Running             etcd                4                   b9a19db1a5df0       etcd-cert-expiration-809455             kube-system
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 5 07:05] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000007] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.001387] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.003757] (rpcbind)[121]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.194557] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000024] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.085807] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.098874] kauditd_printk_skb: 130 callbacks suppressed
	[Dec 5 07:06] kauditd_printk_skb: 171 callbacks suppressed
	[  +4.419778] kauditd_printk_skb: 18 callbacks suppressed
	[ +26.872148] kauditd_printk_skb: 198 callbacks suppressed
	[Dec 5 07:09] kauditd_printk_skb: 5 callbacks suppressed
	[Dec 5 07:10] kauditd_printk_skb: 315 callbacks suppressed
	[Dec 5 07:11] kauditd_printk_skb: 154 callbacks suppressed
	[Dec 5 07:14] kauditd_printk_skb: 51 callbacks suppressed
	[Dec 5 07:18] kauditd_printk_skb: 80 callbacks suppressed
	
	
	==> etcd [bd3418faf118347fa20d92acb095ce12187dd245fe8da6f20c69ae3c21eb7890] <==
	{"level":"info","ts":"2025-12-05T07:19:01.342276Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"b591d44136cef39c is starting a new election at term 1"}
	{"level":"info","ts":"2025-12-05T07:19:01.342330Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"b591d44136cef39c became pre-candidate at term 1"}
	{"level":"info","ts":"2025-12-05T07:19:01.342395Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"b591d44136cef39c received MsgPreVoteResp from b591d44136cef39c at term 1"}
	{"level":"info","ts":"2025-12-05T07:19:01.342431Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"b591d44136cef39c has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-05T07:19:01.342447Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"b591d44136cef39c became candidate at term 2"}
	{"level":"info","ts":"2025-12-05T07:19:01.344930Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"b591d44136cef39c received MsgVoteResp from b591d44136cef39c at term 2"}
	{"level":"info","ts":"2025-12-05T07:19:01.344979Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"b591d44136cef39c has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-05T07:19:01.345000Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"b591d44136cef39c became leader at term 2"}
	{"level":"info","ts":"2025-12-05T07:19:01.345008Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: b591d44136cef39c elected leader b591d44136cef39c at term 2"}
	{"level":"info","ts":"2025-12-05T07:19:01.346299Z","caller":"etcdserver/server.go:2420","msg":"setting up initial cluster version using v3 API","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-05T07:19:01.347356Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"b591d44136cef39c","local-member-attributes":"{Name:cert-expiration-809455 ClientURLs:[https://192.168.61.103:2379]}","cluster-id":"491377e215684224","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-05T07:19:01.347621Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-05T07:19:01.347793Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-05T07:19:01.347859Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-05T07:19:01.347868Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-05T07:19:01.348181Z","caller":"membership/cluster.go:682","msg":"set initial cluster version","cluster-id":"491377e215684224","local-member-id":"b591d44136cef39c","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-05T07:19:01.348278Z","caller":"api/capability.go:76","msg":"enabled capabilities for version","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-05T07:19:01.348301Z","caller":"etcdserver/server.go:2440","msg":"cluster version is updated","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-05T07:19:01.348331Z","caller":"version/monitor.go:116","msg":"cluster version differs from storage version.","cluster-version":"3.6.0","storage-version":"3.5.0"}
	{"level":"info","ts":"2025-12-05T07:19:01.348391Z","caller":"schema/migration.go:65","msg":"updated storage version","new-storage-version":"3.6.0"}
	{"level":"info","ts":"2025-12-05T07:19:01.349073Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"warn","ts":"2025-12-05T07:19:01.349661Z","caller":"v3rpc/grpc.go:52","msg":"etcdserver: failed to register grpc metrics","error":"duplicate metrics collector registration attempted"}
	{"level":"info","ts":"2025-12-05T07:19:01.349744Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-05T07:19:01.352857Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-05T07:19:01.353600Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.103:2379"}
	
	
	==> kernel <==
	 07:23:02 up 17 min,  0 users,  load average: 0.25, 0.23, 0.14
	Linux cert-expiration-809455 6.6.95 #1 SMP PREEMPT_DYNAMIC Wed Nov 19 01:10:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-scheduler [ff3607331547cf3ac6251b301ac6f5e6b5e93a6b7ad5d6d707c3bcd34d7f2c08] <==
	E1205 07:21:57.505382       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: Get \"https://192.168.61.103:8443/apis/resource.k8s.io/v1/resourceslices?limit=500&resourceVersion=0\": dial tcp 192.168.61.103:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1205 07:22:01.334096       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.61.103:8443/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.61.103:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1205 07:22:03.714032       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: Get \"https://192.168.61.103:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.61.103:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1205 07:22:09.484780       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: Get \"https://192.168.61.103:8443/apis/storage.k8s.io/v1/volumeattachments?limit=500&resourceVersion=0\": dial tcp 192.168.61.103:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1205 07:22:12.264551       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.61.103:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.61.103:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1205 07:22:16.648661       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.61.103:8443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.61.103:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1205 07:22:17.401649       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: Get \"https://192.168.61.103:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.61.103:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1205 07:22:18.654479       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: Get \"https://192.168.61.103:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.61.103:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1205 07:22:24.689570       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: Get \"https://192.168.61.103:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.61.103:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1205 07:22:25.635518       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: Get \"https://192.168.61.103:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.61.103:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1205 07:22:28.409603       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://192.168.61.103:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.61.103:8443: connect: connection refused" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1205 07:22:28.463519       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.61.103:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.61.103:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1205 07:22:29.212412       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: Get \"https://192.168.61.103:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.61.103:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1205 07:22:34.922902       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://192.168.61.103:8443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.61.103:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1205 07:22:36.573362       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: Get \"https://192.168.61.103:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.61.103:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1205 07:22:37.250137       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: Get \"https://192.168.61.103:8443/apis/resource.k8s.io/v1/resourceslices?limit=500&resourceVersion=0\": dial tcp 192.168.61.103:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1205 07:22:37.604479       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: Get \"https://192.168.61.103:8443/apis/resource.k8s.io/v1/deviceclasses?limit=500&resourceVersion=0\": dial tcp 192.168.61.103:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1205 07:22:37.883650       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: Get \"https://192.168.61.103:8443/apis/resource.k8s.io/v1/resourceclaims?limit=500&resourceVersion=0\": dial tcp 192.168.61.103:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1205 07:22:38.385790       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: Get \"https://192.168.61.103:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.61.103:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1205 07:22:41.361054       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: Get \"https://192.168.61.103:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.61.103:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1205 07:22:51.547965       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: Get \"https://192.168.61.103:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.61.103:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1205 07:22:55.559601       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: Get \"https://192.168.61.103:8443/apis/storage.k8s.io/v1/volumeattachments?limit=500&resourceVersion=0\": dial tcp 192.168.61.103:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1205 07:22:56.840605       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: Get \"https://192.168.61.103:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.61.103:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1205 07:23:00.833923       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.61.103:8443/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.61.103:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1205 07:23:01.675715       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.61.103:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.61.103:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	
	
	==> kubelet <==
	Dec 05 07:22:49 cert-expiration-809455 kubelet[12934]: E1205 07:22:49.092903   12934 kuberuntime_manager.go:1449] "Unhandled Error" err="container kube-apiserver start failed in pod kube-apiserver-cert-expiration-809455_kube-system(79f08547e00b8097296ac428a3508f09): CreateContainerError: the container name \"k8s_kube-apiserver_kube-apiserver-cert-expiration-809455_kube-system_79f08547e00b8097296ac428a3508f09_1\" is already in use by df20da0487103eaa0029115638ce5694a446f9d58c40f74aa22adbf1720ee960. You have to remove that container to be able to reuse that name: that name is already in use" logger="UnhandledError"
	Dec 05 07:22:49 cert-expiration-809455 kubelet[12934]: E1205 07:22:49.092931   12934 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"the container name \\\"k8s_kube-apiserver_kube-apiserver-cert-expiration-809455_kube-system_79f08547e00b8097296ac428a3508f09_1\\\" is already in use by df20da0487103eaa0029115638ce5694a446f9d58c40f74aa22adbf1720ee960. You have to remove that container to be able to reuse that name: that name is already in use\"" pod="kube-system/kube-apiserver-cert-expiration-809455" podUID="79f08547e00b8097296ac428a3508f09"
	Dec 05 07:22:49 cert-expiration-809455 kubelet[12934]: E1205 07:22:49.722192   12934 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.61.103:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/cert-expiration-809455?timeout=10s\": dial tcp 192.168.61.103:8443: connect: connection refused" interval="7s"
	Dec 05 07:22:49 cert-expiration-809455 kubelet[12934]: I1205 07:22:49.936591   12934 kubelet_node_status.go:75] "Attempting to register node" node="cert-expiration-809455"
	Dec 05 07:22:49 cert-expiration-809455 kubelet[12934]: E1205 07:22:49.936938   12934 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.61.103:8443/api/v1/nodes\": dial tcp 192.168.61.103:8443: connect: connection refused" node="cert-expiration-809455"
	Dec 05 07:22:50 cert-expiration-809455 kubelet[12934]: E1205 07:22:50.083473   12934 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"cert-expiration-809455\" not found" node="cert-expiration-809455"
	Dec 05 07:22:50 cert-expiration-809455 kubelet[12934]: E1205 07:22:50.092275   12934 log.go:32] "CreateContainer in sandbox from runtime service failed" err="rpc error: code = Unknown desc = the container name \"k8s_kube-controller-manager_kube-controller-manager-cert-expiration-809455_kube-system_904fc47de672b02808ee344cfb3ee7d9_1\" is already in use by 182562b64c8df1f4bc11121bf2da614ea2b30afbe41ce310bedb6b60db6b8b34. You have to remove that container to be able to reuse that name: that name is already in use" podSandboxID="fb6d8a0eaf49343b9b95c2685d95f80c1a11008f14b550b4111c9edf3dec1ffd"
	Dec 05 07:22:50 cert-expiration-809455 kubelet[12934]: E1205 07:22:50.092334   12934 kuberuntime_manager.go:1449] "Unhandled Error" err="container kube-controller-manager start failed in pod kube-controller-manager-cert-expiration-809455_kube-system(904fc47de672b02808ee344cfb3ee7d9): CreateContainerError: the container name \"k8s_kube-controller-manager_kube-controller-manager-cert-expiration-809455_kube-system_904fc47de672b02808ee344cfb3ee7d9_1\" is already in use by 182562b64c8df1f4bc11121bf2da614ea2b30afbe41ce310bedb6b60db6b8b34. You have to remove that container to be able to reuse that name: that name is already in use" logger="UnhandledError"
	Dec 05 07:22:50 cert-expiration-809455 kubelet[12934]: E1205 07:22:50.092363   12934 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"the container name \\\"k8s_kube-controller-manager_kube-controller-manager-cert-expiration-809455_kube-system_904fc47de672b02808ee344cfb3ee7d9_1\\\" is already in use by 182562b64c8df1f4bc11121bf2da614ea2b30afbe41ce310bedb6b60db6b8b34. You have to remove that container to be able to reuse that name: that name is already in use\"" pod="kube-system/kube-controller-manager-cert-expiration-809455" podUID="904fc47de672b02808ee344cfb3ee7d9"
	Dec 05 07:22:50 cert-expiration-809455 kubelet[12934]: E1205 07:22:50.181465   12934 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1764919370181139610 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:124341} inodes_used:{value:48}}"
	Dec 05 07:22:50 cert-expiration-809455 kubelet[12934]: E1205 07:22:50.181488   12934 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1764919370181139610 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:124341} inodes_used:{value:48}}"
	Dec 05 07:22:52 cert-expiration-809455 kubelet[12934]: E1205 07:22:52.082445   12934 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"cert-expiration-809455\" not found" node="cert-expiration-809455"
	Dec 05 07:22:55 cert-expiration-809455 kubelet[12934]: E1205 07:22:55.202743   12934 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://192.168.61.103:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.61.103:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
	Dec 05 07:22:55 cert-expiration-809455 kubelet[12934]: E1205 07:22:55.351345   12934 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.61.103:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.61.103:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	Dec 05 07:22:56 cert-expiration-809455 kubelet[12934]: E1205 07:22:56.722868   12934 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.61.103:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/cert-expiration-809455?timeout=10s\": dial tcp 192.168.61.103:8443: connect: connection refused" interval="7s"
	Dec 05 07:22:56 cert-expiration-809455 kubelet[12934]: I1205 07:22:56.939138   12934 kubelet_node_status.go:75] "Attempting to register node" node="cert-expiration-809455"
	Dec 05 07:22:56 cert-expiration-809455 kubelet[12934]: E1205 07:22:56.939643   12934 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.61.103:8443/api/v1/nodes\": dial tcp 192.168.61.103:8443: connect: connection refused" node="cert-expiration-809455"
	Dec 05 07:22:57 cert-expiration-809455 kubelet[12934]: E1205 07:22:57.958981   12934 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.61.103:8443/api/v1/namespaces/default/events\": dial tcp 192.168.61.103:8443: connect: connection refused" event="&Event{ObjectMeta:{cert-expiration-809455.187e409b644bbc67  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:cert-expiration-809455,UID:cert-expiration-809455,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node cert-expiration-809455 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:cert-expiration-809455,},FirstTimestamp:2025-12-05 07:19:00.122606695 +0000 UTC m=+0.667196898,LastTimestamp:2025-12-05 07:19:00.122606695 +0000 UTC m=+0.667196898,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:cert-expiration-809455,}"
	Dec 05 07:23:00 cert-expiration-809455 kubelet[12934]: E1205 07:23:00.182818   12934 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1764919380182546091 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:124341} inodes_used:{value:48}}"
	Dec 05 07:23:00 cert-expiration-809455 kubelet[12934]: E1205 07:23:00.182849   12934 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1764919380182546091 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:124341} inodes_used:{value:48}}"
	Dec 05 07:23:02 cert-expiration-809455 kubelet[12934]: E1205 07:23:02.085841   12934 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"cert-expiration-809455\" not found" node="cert-expiration-809455"
	Dec 05 07:23:02 cert-expiration-809455 kubelet[12934]: E1205 07:23:02.099766   12934 log.go:32] "CreateContainer in sandbox from runtime service failed" err="rpc error: code = Unknown desc = the container name \"k8s_kube-controller-manager_kube-controller-manager-cert-expiration-809455_kube-system_904fc47de672b02808ee344cfb3ee7d9_1\" is already in use by 182562b64c8df1f4bc11121bf2da614ea2b30afbe41ce310bedb6b60db6b8b34. You have to remove that container to be able to reuse that name: that name is already in use" podSandboxID="fb6d8a0eaf49343b9b95c2685d95f80c1a11008f14b550b4111c9edf3dec1ffd"
	Dec 05 07:23:02 cert-expiration-809455 kubelet[12934]: E1205 07:23:02.099859   12934 kuberuntime_manager.go:1449] "Unhandled Error" err="container kube-controller-manager start failed in pod kube-controller-manager-cert-expiration-809455_kube-system(904fc47de672b02808ee344cfb3ee7d9): CreateContainerError: the container name \"k8s_kube-controller-manager_kube-controller-manager-cert-expiration-809455_kube-system_904fc47de672b02808ee344cfb3ee7d9_1\" is already in use by 182562b64c8df1f4bc11121bf2da614ea2b30afbe41ce310bedb6b60db6b8b34. You have to remove that container to be able to reuse that name: that name is already in use" logger="UnhandledError"
	Dec 05 07:23:02 cert-expiration-809455 kubelet[12934]: E1205 07:23:02.099892   12934 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"the container name \\\"k8s_kube-controller-manager_kube-controller-manager-cert-expiration-809455_kube-system_904fc47de672b02808ee344cfb3ee7d9_1\\\" is already in use by 182562b64c8df1f4bc11121bf2da614ea2b30afbe41ce310bedb6b60db6b8b34. You have to remove that container to be able to reuse that name: that name is already in use\"" pod="kube-system/kube-controller-manager-cert-expiration-809455" podUID="904fc47de672b02808ee344cfb3ee7d9"
	Dec 05 07:23:02 cert-expiration-809455 kubelet[12934]: E1205 07:23:02.224936   12934 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.61.103:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.61.103:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p cert-expiration-809455 -n cert-expiration-809455
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p cert-expiration-809455 -n cert-expiration-809455: exit status 2 (193.301235ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "cert-expiration-809455" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "cert-expiration-809455" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-809455
--- FAIL: TestCertExpiration (1074.50s)
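Note on this failure: the kubelet log above shows why the control plane never recovered. Every restart attempt for kube-apiserver and kube-controller-manager fails with CreateContainerError because CRI-O still holds an exited container registered under the same name (for example "k8s_kube-apiserver_..._1" owned by df20da0487103eaa...), so the kubeadm health checks on 192.168.61.103:8443 and 127.0.0.1:10257 stay connection-refused until the wait times out. A minimal diagnostic sketch, assuming shell access to the cert-expiration-809455 VM (for example via `minikube ssh -p cert-expiration-809455`); the container ID is the one reported in this run's kubelet log and would differ on any other run:

	# list all kube containers, including exited ones, to find the stale entry
	$ sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	# check why the earlier attempt died (ID taken from the CreateContainerError above)
	$ sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs df20da0487103eaa0029115638ce5694a446f9d58c40f74aa22adbf1720ee960
	# if that container is no longer running, remove it so the kubelet can reuse the name on its next sync
	$ sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock rm df20da0487103eaa0029115638ce5694a446f9d58c40f74aa22adbf1720ee960

This only covers locating and clearing the name conflict; whether the kubelet then brings the pods up cleanly is not something this report shows.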

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (2.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-158571 image ls --format short --alsologtostderr
functional_test.go:276: (dbg) Done: out/minikube-linux-amd64 -p functional-158571 image ls --format short --alsologtostderr: (2.231936915s)
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-158571 image ls --format short --alsologtostderr:

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-158571 image ls --format short --alsologtostderr:
I1205 06:17:15.143891   23321 out.go:360] Setting OutFile to fd 1 ...
I1205 06:17:15.144005   23321 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1205 06:17:15.144015   23321 out.go:374] Setting ErrFile to fd 2...
I1205 06:17:15.144021   23321 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1205 06:17:15.144216   23321 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-12744/.minikube/bin
I1205 06:17:15.144759   23321 config.go:182] Loaded profile config "functional-158571": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1205 06:17:15.144883   23321 config.go:182] Loaded profile config "functional-158571": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1205 06:17:15.147080   23321 ssh_runner.go:195] Run: systemctl --version
I1205 06:17:15.149948   23321 main.go:143] libmachine: domain functional-158571 has defined MAC address 52:54:00:b0:54:27 in network mk-functional-158571
I1205 06:17:15.150407   23321 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b0:54:27", ip: ""} in network mk-functional-158571: {Iface:virbr1 ExpiryTime:2025-12-05 07:14:17 +0000 UTC Type:0 Mac:52:54:00:b0:54:27 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:functional-158571 Clientid:01:52:54:00:b0:54:27}
I1205 06:17:15.150432   23321 main.go:143] libmachine: domain functional-158571 has defined IP address 192.168.39.7 and MAC address 52:54:00:b0:54:27 in network mk-functional-158571
I1205 06:17:15.150566   23321 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-12744/.minikube/machines/functional-158571/id_rsa Username:docker}
I1205 06:17:15.238167   23321 ssh_runner.go:195] Run: sudo crictl images --output json
I1205 06:17:17.289734   23321 ssh_runner.go:235] Completed: sudo crictl images --output json: (2.051515031s)
W1205 06:17:17.289844   23321 cache_images.go:736] Failed to list images for profile functional-158571 crictl images: sudo crictl images --output json: Process exited with status 1
stdout:

                                                
                                                
stderr:
E1205 06:17:17.278474    9301 log.go:32] "ListImages with filter from image service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" filter="image:{}"
time="2025-12-05T06:17:17Z" level=fatal msg="listing images: rpc error: code = DeadlineExceeded desc = context deadline exceeded"
functional_test.go:290: expected registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListShort (2.23s)
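Note on this failure: the stderr above shows the listing failed inside the guest rather than an image actually being missing. `sudo crictl images --output json` returned DeadlineExceeded after about 2.05s, which lines up with crictl's default 2s request timeout, so `image ls` printed an empty list and the check for registry.k8s.io/pause had nothing to match against. A rough way to test the timeout theory on the node, assuming the functional-158571 VM were still up, would be to repeat the same call with a longer client timeout:

	$ minikube ssh -p functional-158571
	# raise crictl's request timeout (default 2s) and rerun the query the test relies on
	$ sudo crictl --timeout=30s images --output json
	# the short listing should then include registry.k8s.io/pause among the cached images
	$ sudo crictl --timeout=30s images | grep pause

If the call still exceeds the longer timeout, the slowness sits in CRI-O or its image storage rather than in the client-side deadline.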

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (6.83s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-158571 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-158571 ssh pgrep buildkitd: exit status 1 (202.006628ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-158571 image build -t localhost/my-image:functional-158571 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-158571 image build -t localhost/my-image:functional-158571 testdata/build --alsologtostderr: (4.201161972s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-158571 image build -t localhost/my-image:functional-158571 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 77c3eaaf425
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-158571
--> 78a83a27258
Successfully tagged localhost/my-image:functional-158571
78a83a27258588324f911c853abe24de0a9f7874b10575a7961fe4b000ce862c
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-158571 image build -t localhost/my-image:functional-158571 testdata/build --alsologtostderr:
I1205 06:17:19.201093   23389 out.go:360] Setting OutFile to fd 1 ...
I1205 06:17:19.201493   23389 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1205 06:17:19.201508   23389 out.go:374] Setting ErrFile to fd 2...
I1205 06:17:19.201515   23389 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1205 06:17:19.201869   23389 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-12744/.minikube/bin
I1205 06:17:19.202699   23389 config.go:182] Loaded profile config "functional-158571": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1205 06:17:19.203500   23389 config.go:182] Loaded profile config "functional-158571": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1205 06:17:19.205614   23389 ssh_runner.go:195] Run: systemctl --version
I1205 06:17:19.207653   23389 main.go:143] libmachine: domain functional-158571 has defined MAC address 52:54:00:b0:54:27 in network mk-functional-158571
I1205 06:17:19.208094   23389 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b0:54:27", ip: ""} in network mk-functional-158571: {Iface:virbr1 ExpiryTime:2025-12-05 07:14:17 +0000 UTC Type:0 Mac:52:54:00:b0:54:27 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:functional-158571 Clientid:01:52:54:00:b0:54:27}
I1205 06:17:19.208120   23389 main.go:143] libmachine: domain functional-158571 has defined IP address 192.168.39.7 and MAC address 52:54:00:b0:54:27 in network mk-functional-158571
I1205 06:17:19.208264   23389 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-12744/.minikube/machines/functional-158571/id_rsa Username:docker}
I1205 06:17:19.297557   23389 build_images.go:162] Building image from path: /tmp/build.2160745911.tar
I1205 06:17:19.297635   23389 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1205 06:17:19.309625   23389 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2160745911.tar
I1205 06:17:19.314476   23389 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2160745911.tar: stat -c "%s %y" /var/lib/minikube/build/build.2160745911.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.2160745911.tar': No such file or directory
I1205 06:17:19.314515   23389 ssh_runner.go:362] scp /tmp/build.2160745911.tar --> /var/lib/minikube/build/build.2160745911.tar (3072 bytes)
I1205 06:17:19.352143   23389 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2160745911
I1205 06:17:19.364034   23389 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2160745911 -xf /var/lib/minikube/build/build.2160745911.tar
I1205 06:17:19.377560   23389 crio.go:315] Building image: /var/lib/minikube/build/build.2160745911
I1205 06:17:19.377640   23389 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-158571 /var/lib/minikube/build/build.2160745911 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1205 06:17:23.264959   23389 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-158571 /var/lib/minikube/build/build.2160745911 --cgroup-manager=cgroupfs: (3.887289518s)
I1205 06:17:23.265038   23389 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2160745911
I1205 06:17:23.296197   23389 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2160745911.tar
I1205 06:17:23.323139   23389 build_images.go:218] Built localhost/my-image:functional-158571 from /tmp/build.2160745911.tar
I1205 06:17:23.323179   23389 build_images.go:134] succeeded building to: functional-158571
I1205 06:17:23.323185   23389 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-158571 image ls
functional_test.go:466: (dbg) Done: out/minikube-linux-amd64 -p functional-158571 image ls: (2.422359357s)
functional_test.go:461: expected "localhost/my-image:functional-158571" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageBuild (6.83s)
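
The build itself completes (podman inside the VM reports "Successfully tagged localhost/my-image:functional-158571"); it is the follow-up listing that comes back without the image. A minimal sketch of the same steps, with the image name and build context taken from the log above and the crictl query added as an extra diagnostic:

	# rebuild from the same testdata context, then list what minikube sees
	out/minikube-linux-amd64 -p functional-158571 image build -t localhost/my-image:functional-158571 testdata/build
	out/minikube-linux-amd64 -p functional-158571 image ls | grep my-image
	# if the listing is empty, ask the runtime directly whether the image exists at all
	out/minikube-linux-amd64 -p functional-158571 ssh -- sudo crictl images | grep my-image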

                                                
                                    
TestPreload (117.06s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:41: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-755077 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio
E1205 06:58:27.901638   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/addons-704432/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:41: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-755077 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio: (1m0.8426834s)
preload_test.go:49: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-755077 image pull gcr.io/k8s-minikube/busybox
preload_test.go:49: (dbg) Done: out/minikube-linux-amd64 -p test-preload-755077 image pull gcr.io/k8s-minikube/busybox: (3.437963326s)
preload_test.go:55: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-755077
E1205 06:58:40.546626   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/functional-895947/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:55: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-755077: (6.755302086s)
preload_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-755077 --preload=true --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
preload_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-755077 --preload=true --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (43.42677765s)
preload_test.go:68: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-755077 image list
preload_test.go:73: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/kube-scheduler:v1.34.2
	registry.k8s.io/kube-proxy:v1.34.2
	registry.k8s.io/kube-controller-manager:v1.34.2
	registry.k8s.io/kube-apiserver:v1.34.2
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20250512-df8de77b

                                                
                                                
-- /stdout --
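
The flow the test drives is visible in the Run lines above; condensed into the bare commands (profile name and flags copied from the log, timings omitted), the failing scenario is:

	# create the cluster without a preload and pull an extra image
	out/minikube-linux-amd64 start -p test-preload-755077 --memory=3072 --preload=false --driver=kvm2 --container-runtime=crio
	out/minikube-linux-amd64 -p test-preload-755077 image pull gcr.io/k8s-minikube/busybox
	# stop, then restart with the preload enabled
	out/minikube-linux-amd64 stop -p test-preload-755077
	out/minikube-linux-amd64 start -p test-preload-755077 --preload=true --driver=kvm2 --container-runtime=crio
	# the assertion that fails: busybox should survive the restart and appear in this list
	out/minikube-linux-amd64 -p test-preload-755077 image list

The listing above contains only the images restored from the v1.34.2 preload tarball, so the busybox image pulled before the stop did not survive the restart.
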
panic.go:615: *** TestPreload FAILED at 2025-12-05 06:59:27.367302728 +0000 UTC m=+3273.527001498
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPreload]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-755077 -n test-preload-755077
helpers_test.go:252: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-755077 logs -n 25
helpers_test.go:260: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                           ARGS                                                                            │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ multinode-916216 ssh -n multinode-916216-m03 sudo cat /home/docker/cp-test.txt                                                                            │ multinode-916216     │ jenkins │ v1.37.0 │ 05 Dec 25 06:47 UTC │ 05 Dec 25 06:47 UTC │
	│ ssh     │ multinode-916216 ssh -n multinode-916216 sudo cat /home/docker/cp-test_multinode-916216-m03_multinode-916216.txt                                          │ multinode-916216     │ jenkins │ v1.37.0 │ 05 Dec 25 06:47 UTC │ 05 Dec 25 06:47 UTC │
	│ cp      │ multinode-916216 cp multinode-916216-m03:/home/docker/cp-test.txt multinode-916216-m02:/home/docker/cp-test_multinode-916216-m03_multinode-916216-m02.txt │ multinode-916216     │ jenkins │ v1.37.0 │ 05 Dec 25 06:47 UTC │ 05 Dec 25 06:47 UTC │
	│ ssh     │ multinode-916216 ssh -n multinode-916216-m03 sudo cat /home/docker/cp-test.txt                                                                            │ multinode-916216     │ jenkins │ v1.37.0 │ 05 Dec 25 06:47 UTC │ 05 Dec 25 06:47 UTC │
	│ ssh     │ multinode-916216 ssh -n multinode-916216-m02 sudo cat /home/docker/cp-test_multinode-916216-m03_multinode-916216-m02.txt                                  │ multinode-916216     │ jenkins │ v1.37.0 │ 05 Dec 25 06:47 UTC │ 05 Dec 25 06:47 UTC │
	│ node    │ multinode-916216 node stop m03                                                                                                                            │ multinode-916216     │ jenkins │ v1.37.0 │ 05 Dec 25 06:47 UTC │ 05 Dec 25 06:47 UTC │
	│ node    │ multinode-916216 node start m03 -v=5 --alsologtostderr                                                                                                    │ multinode-916216     │ jenkins │ v1.37.0 │ 05 Dec 25 06:47 UTC │ 05 Dec 25 06:47 UTC │
	│ node    │ list -p multinode-916216                                                                                                                                  │ multinode-916216     │ jenkins │ v1.37.0 │ 05 Dec 25 06:47 UTC │                     │
	│ stop    │ -p multinode-916216                                                                                                                                       │ multinode-916216     │ jenkins │ v1.37.0 │ 05 Dec 25 06:47 UTC │ 05 Dec 25 06:50 UTC │
	│ start   │ -p multinode-916216 --wait=true -v=5 --alsologtostderr                                                                                                    │ multinode-916216     │ jenkins │ v1.37.0 │ 05 Dec 25 06:50 UTC │ 05 Dec 25 06:52 UTC │
	│ node    │ list -p multinode-916216                                                                                                                                  │ multinode-916216     │ jenkins │ v1.37.0 │ 05 Dec 25 06:52 UTC │                     │
	│ node    │ multinode-916216 node delete m03                                                                                                                          │ multinode-916216     │ jenkins │ v1.37.0 │ 05 Dec 25 06:52 UTC │ 05 Dec 25 06:52 UTC │
	│ stop    │ multinode-916216 stop                                                                                                                                     │ multinode-916216     │ jenkins │ v1.37.0 │ 05 Dec 25 06:52 UTC │ 05 Dec 25 06:55 UTC │
	│ start   │ -p multinode-916216 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio                                                            │ multinode-916216     │ jenkins │ v1.37.0 │ 05 Dec 25 06:55 UTC │ 05 Dec 25 06:56 UTC │
	│ node    │ list -p multinode-916216                                                                                                                                  │ multinode-916216     │ jenkins │ v1.37.0 │ 05 Dec 25 06:56 UTC │                     │
	│ start   │ -p multinode-916216-m02 --driver=kvm2  --container-runtime=crio                                                                                           │ multinode-916216-m02 │ jenkins │ v1.37.0 │ 05 Dec 25 06:56 UTC │                     │
	│ start   │ -p multinode-916216-m03 --driver=kvm2  --container-runtime=crio                                                                                           │ multinode-916216-m03 │ jenkins │ v1.37.0 │ 05 Dec 25 06:56 UTC │ 05 Dec 25 06:57 UTC │
	│ node    │ add -p multinode-916216                                                                                                                                   │ multinode-916216     │ jenkins │ v1.37.0 │ 05 Dec 25 06:57 UTC │                     │
	│ delete  │ -p multinode-916216-m03                                                                                                                                   │ multinode-916216-m03 │ jenkins │ v1.37.0 │ 05 Dec 25 06:57 UTC │ 05 Dec 25 06:57 UTC │
	│ delete  │ -p multinode-916216                                                                                                                                       │ multinode-916216     │ jenkins │ v1.37.0 │ 05 Dec 25 06:57 UTC │ 05 Dec 25 06:57 UTC │
	│ start   │ -p test-preload-755077 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio                                │ test-preload-755077  │ jenkins │ v1.37.0 │ 05 Dec 25 06:57 UTC │ 05 Dec 25 06:58 UTC │
	│ image   │ test-preload-755077 image pull gcr.io/k8s-minikube/busybox                                                                                                │ test-preload-755077  │ jenkins │ v1.37.0 │ 05 Dec 25 06:58 UTC │ 05 Dec 25 06:58 UTC │
	│ stop    │ -p test-preload-755077                                                                                                                                    │ test-preload-755077  │ jenkins │ v1.37.0 │ 05 Dec 25 06:58 UTC │ 05 Dec 25 06:58 UTC │
	│ start   │ -p test-preload-755077 --preload=true --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio                                          │ test-preload-755077  │ jenkins │ v1.37.0 │ 05 Dec 25 06:58 UTC │ 05 Dec 25 06:59 UTC │
	│ image   │ test-preload-755077 image list                                                                                                                            │ test-preload-755077  │ jenkins │ v1.37.0 │ 05 Dec 25 06:59 UTC │ 05 Dec 25 06:59 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/05 06:58:43
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 06:58:43.803757   42195 out.go:360] Setting OutFile to fd 1 ...
	I1205 06:58:43.804020   42195 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:58:43.804030   42195 out.go:374] Setting ErrFile to fd 2...
	I1205 06:58:43.804034   42195 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:58:43.804244   42195 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-12744/.minikube/bin
	I1205 06:58:43.804640   42195 out.go:368] Setting JSON to false
	I1205 06:58:43.805549   42195 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":6069,"bootTime":1764911855,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1205 06:58:43.805603   42195 start.go:143] virtualization: kvm guest
	I1205 06:58:43.807787   42195 out.go:179] * [test-preload-755077] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1205 06:58:43.808875   42195 notify.go:221] Checking for updates...
	I1205 06:58:43.808914   42195 out.go:179]   - MINIKUBE_LOCATION=21997
	I1205 06:58:43.810206   42195 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 06:58:43.811527   42195 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-12744/kubeconfig
	I1205 06:58:43.812721   42195 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-12744/.minikube
	I1205 06:58:43.814079   42195 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1205 06:58:43.815458   42195 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 06:58:43.817013   42195 config.go:182] Loaded profile config "test-preload-755077": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1205 06:58:43.817464   42195 driver.go:422] Setting default libvirt URI to qemu:///system
	I1205 06:58:43.849720   42195 out.go:179] * Using the kvm2 driver based on existing profile
	I1205 06:58:43.850932   42195 start.go:309] selected driver: kvm2
	I1205 06:58:43.850945   42195 start.go:927] validating driver "kvm2" against &{Name:test-preload-755077 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:test-preload-755077 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.198 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 06:58:43.851041   42195 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 06:58:43.851996   42195 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 06:58:43.852023   42195 cni.go:84] Creating CNI manager for ""
	I1205 06:58:43.852071   42195 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 06:58:43.852126   42195 start.go:353] cluster config:
	{Name:test-preload-755077 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:test-preload-755077 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.198 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 06:58:43.852221   42195 iso.go:125] acquiring lock: {Name:mk8940d2199650f8674488213bff178b8d82a626 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:58:43.853801   42195 out.go:179] * Starting "test-preload-755077" primary control-plane node in "test-preload-755077" cluster
	I1205 06:58:43.854810   42195 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1205 06:58:43.854834   42195 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21997-12744/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1205 06:58:43.854840   42195 cache.go:65] Caching tarball of preloaded images
	I1205 06:58:43.854963   42195 preload.go:238] Found /home/jenkins/minikube-integration/21997-12744/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1205 06:58:43.854979   42195 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1205 06:58:43.855073   42195 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/test-preload-755077/config.json ...
	I1205 06:58:43.855262   42195 start.go:360] acquireMachinesLock for test-preload-755077: {Name:mk6f885ffa3cca5ad53a733e47a4c8f74f8579b1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 06:58:43.855305   42195 start.go:364] duration metric: took 24.935µs to acquireMachinesLock for "test-preload-755077"
	I1205 06:58:43.855323   42195 start.go:96] Skipping create...Using existing machine configuration
	I1205 06:58:43.855331   42195 fix.go:54] fixHost starting: 
	I1205 06:58:43.856854   42195 fix.go:112] recreateIfNeeded on test-preload-755077: state=Stopped err=<nil>
	W1205 06:58:43.856881   42195 fix.go:138] unexpected machine state, will restart: <nil>
	I1205 06:58:43.858316   42195 out.go:252] * Restarting existing kvm2 VM for "test-preload-755077" ...
	I1205 06:58:43.858342   42195 main.go:143] libmachine: starting domain...
	I1205 06:58:43.858349   42195 main.go:143] libmachine: ensuring networks are active...
	I1205 06:58:43.859087   42195 main.go:143] libmachine: Ensuring network default is active
	I1205 06:58:43.859499   42195 main.go:143] libmachine: Ensuring network mk-test-preload-755077 is active
	I1205 06:58:43.859865   42195 main.go:143] libmachine: getting domain XML...
	I1205 06:58:43.860864   42195 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>test-preload-755077</name>
	  <uuid>e263e86f-e330-41a7-bdab-1b93aa8d904c</uuid>
	  <memory unit='KiB'>3145728</memory>
	  <currentMemory unit='KiB'>3145728</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/21997-12744/.minikube/machines/test-preload-755077/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/21997-12744/.minikube/machines/test-preload-755077/test-preload-755077.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:ef:ff:0e'/>
	      <source network='mk-test-preload-755077'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:89:dc:6e'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
	
	I1205 06:58:45.100386   42195 main.go:143] libmachine: waiting for domain to start...
	I1205 06:58:45.101675   42195 main.go:143] libmachine: domain is now running
	I1205 06:58:45.101702   42195 main.go:143] libmachine: waiting for IP...
	I1205 06:58:45.102420   42195 main.go:143] libmachine: domain test-preload-755077 has defined MAC address 52:54:00:ef:ff:0e in network mk-test-preload-755077
	I1205 06:58:45.102921   42195 main.go:143] libmachine: domain test-preload-755077 has current primary IP address 192.168.39.198 and MAC address 52:54:00:ef:ff:0e in network mk-test-preload-755077
	I1205 06:58:45.102934   42195 main.go:143] libmachine: found domain IP: 192.168.39.198
	I1205 06:58:45.102939   42195 main.go:143] libmachine: reserving static IP address...
	I1205 06:58:45.103242   42195 main.go:143] libmachine: found host DHCP lease matching {name: "test-preload-755077", mac: "52:54:00:ef:ff:0e", ip: "192.168.39.198"} in network mk-test-preload-755077: {Iface:virbr1 ExpiryTime:2025-12-05 07:57:47 +0000 UTC Type:0 Mac:52:54:00:ef:ff:0e Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:test-preload-755077 Clientid:01:52:54:00:ef:ff:0e}
	I1205 06:58:45.103261   42195 main.go:143] libmachine: skip adding static IP to network mk-test-preload-755077 - found existing host DHCP lease matching {name: "test-preload-755077", mac: "52:54:00:ef:ff:0e", ip: "192.168.39.198"}
	I1205 06:58:45.103269   42195 main.go:143] libmachine: reserved static IP address 192.168.39.198 for domain test-preload-755077
	I1205 06:58:45.103276   42195 main.go:143] libmachine: waiting for SSH...
	I1205 06:58:45.103282   42195 main.go:143] libmachine: Getting to WaitForSSH function...
	I1205 06:58:45.105293   42195 main.go:143] libmachine: domain test-preload-755077 has defined MAC address 52:54:00:ef:ff:0e in network mk-test-preload-755077
	I1205 06:58:45.105613   42195 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ef:ff:0e", ip: ""} in network mk-test-preload-755077: {Iface:virbr1 ExpiryTime:2025-12-05 07:57:47 +0000 UTC Type:0 Mac:52:54:00:ef:ff:0e Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:test-preload-755077 Clientid:01:52:54:00:ef:ff:0e}
	I1205 06:58:45.105631   42195 main.go:143] libmachine: domain test-preload-755077 has defined IP address 192.168.39.198 and MAC address 52:54:00:ef:ff:0e in network mk-test-preload-755077
	I1205 06:58:45.105785   42195 main.go:143] libmachine: Using SSH client type: native
	I1205 06:58:45.106000   42195 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.198 22 <nil> <nil>}
	I1205 06:58:45.106009   42195 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1205 06:58:48.171950   42195 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.39.198:22: connect: no route to host
	I1205 06:58:54.252079   42195 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.39.198:22: connect: no route to host
	I1205 06:58:57.371295   42195 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1205 06:58:57.374883   42195 main.go:143] libmachine: domain test-preload-755077 has defined MAC address 52:54:00:ef:ff:0e in network mk-test-preload-755077
	I1205 06:58:57.375336   42195 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ef:ff:0e", ip: ""} in network mk-test-preload-755077: {Iface:virbr1 ExpiryTime:2025-12-05 07:58:55 +0000 UTC Type:0 Mac:52:54:00:ef:ff:0e Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:test-preload-755077 Clientid:01:52:54:00:ef:ff:0e}
	I1205 06:58:57.375363   42195 main.go:143] libmachine: domain test-preload-755077 has defined IP address 192.168.39.198 and MAC address 52:54:00:ef:ff:0e in network mk-test-preload-755077
	I1205 06:58:57.375588   42195 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/test-preload-755077/config.json ...
	I1205 06:58:57.375800   42195 machine.go:94] provisionDockerMachine start ...
	I1205 06:58:57.378049   42195 main.go:143] libmachine: domain test-preload-755077 has defined MAC address 52:54:00:ef:ff:0e in network mk-test-preload-755077
	I1205 06:58:57.378359   42195 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ef:ff:0e", ip: ""} in network mk-test-preload-755077: {Iface:virbr1 ExpiryTime:2025-12-05 07:58:55 +0000 UTC Type:0 Mac:52:54:00:ef:ff:0e Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:test-preload-755077 Clientid:01:52:54:00:ef:ff:0e}
	I1205 06:58:57.378396   42195 main.go:143] libmachine: domain test-preload-755077 has defined IP address 192.168.39.198 and MAC address 52:54:00:ef:ff:0e in network mk-test-preload-755077
	I1205 06:58:57.378559   42195 main.go:143] libmachine: Using SSH client type: native
	I1205 06:58:57.378753   42195 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.198 22 <nil> <nil>}
	I1205 06:58:57.378763   42195 main.go:143] libmachine: About to run SSH command:
	hostname
	I1205 06:58:57.490084   42195 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1205 06:58:57.490110   42195 buildroot.go:166] provisioning hostname "test-preload-755077"
	I1205 06:58:57.493156   42195 main.go:143] libmachine: domain test-preload-755077 has defined MAC address 52:54:00:ef:ff:0e in network mk-test-preload-755077
	I1205 06:58:57.493609   42195 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ef:ff:0e", ip: ""} in network mk-test-preload-755077: {Iface:virbr1 ExpiryTime:2025-12-05 07:58:55 +0000 UTC Type:0 Mac:52:54:00:ef:ff:0e Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:test-preload-755077 Clientid:01:52:54:00:ef:ff:0e}
	I1205 06:58:57.493640   42195 main.go:143] libmachine: domain test-preload-755077 has defined IP address 192.168.39.198 and MAC address 52:54:00:ef:ff:0e in network mk-test-preload-755077
	I1205 06:58:57.493858   42195 main.go:143] libmachine: Using SSH client type: native
	I1205 06:58:57.494088   42195 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.198 22 <nil> <nil>}
	I1205 06:58:57.494102   42195 main.go:143] libmachine: About to run SSH command:
	sudo hostname test-preload-755077 && echo "test-preload-755077" | sudo tee /etc/hostname
	I1205 06:58:57.623502   42195 main.go:143] libmachine: SSH cmd err, output: <nil>: test-preload-755077
	
	I1205 06:58:57.626415   42195 main.go:143] libmachine: domain test-preload-755077 has defined MAC address 52:54:00:ef:ff:0e in network mk-test-preload-755077
	I1205 06:58:57.626928   42195 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ef:ff:0e", ip: ""} in network mk-test-preload-755077: {Iface:virbr1 ExpiryTime:2025-12-05 07:58:55 +0000 UTC Type:0 Mac:52:54:00:ef:ff:0e Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:test-preload-755077 Clientid:01:52:54:00:ef:ff:0e}
	I1205 06:58:57.626961   42195 main.go:143] libmachine: domain test-preload-755077 has defined IP address 192.168.39.198 and MAC address 52:54:00:ef:ff:0e in network mk-test-preload-755077
	I1205 06:58:57.627180   42195 main.go:143] libmachine: Using SSH client type: native
	I1205 06:58:57.627455   42195 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.198 22 <nil> <nil>}
	I1205 06:58:57.627479   42195 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-755077' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-755077/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-755077' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 06:58:57.750432   42195 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1205 06:58:57.750462   42195 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21997-12744/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-12744/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-12744/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-12744/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-12744/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-12744/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-12744/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-12744/.minikube}
	I1205 06:58:57.750480   42195 buildroot.go:174] setting up certificates
	I1205 06:58:57.750490   42195 provision.go:84] configureAuth start
	I1205 06:58:57.753359   42195 main.go:143] libmachine: domain test-preload-755077 has defined MAC address 52:54:00:ef:ff:0e in network mk-test-preload-755077
	I1205 06:58:57.753799   42195 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ef:ff:0e", ip: ""} in network mk-test-preload-755077: {Iface:virbr1 ExpiryTime:2025-12-05 07:58:55 +0000 UTC Type:0 Mac:52:54:00:ef:ff:0e Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:test-preload-755077 Clientid:01:52:54:00:ef:ff:0e}
	I1205 06:58:57.753822   42195 main.go:143] libmachine: domain test-preload-755077 has defined IP address 192.168.39.198 and MAC address 52:54:00:ef:ff:0e in network mk-test-preload-755077
	I1205 06:58:57.756314   42195 main.go:143] libmachine: domain test-preload-755077 has defined MAC address 52:54:00:ef:ff:0e in network mk-test-preload-755077
	I1205 06:58:57.756702   42195 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ef:ff:0e", ip: ""} in network mk-test-preload-755077: {Iface:virbr1 ExpiryTime:2025-12-05 07:58:55 +0000 UTC Type:0 Mac:52:54:00:ef:ff:0e Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:test-preload-755077 Clientid:01:52:54:00:ef:ff:0e}
	I1205 06:58:57.756737   42195 main.go:143] libmachine: domain test-preload-755077 has defined IP address 192.168.39.198 and MAC address 52:54:00:ef:ff:0e in network mk-test-preload-755077
	I1205 06:58:57.756894   42195 provision.go:143] copyHostCerts
	I1205 06:58:57.756964   42195 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-12744/.minikube/ca.pem, removing ...
	I1205 06:58:57.756979   42195 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-12744/.minikube/ca.pem
	I1205 06:58:57.757073   42195 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-12744/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-12744/.minikube/ca.pem (1078 bytes)
	I1205 06:58:57.757209   42195 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-12744/.minikube/cert.pem, removing ...
	I1205 06:58:57.757217   42195 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-12744/.minikube/cert.pem
	I1205 06:58:57.757248   42195 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-12744/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-12744/.minikube/cert.pem (1123 bytes)
	I1205 06:58:57.757320   42195 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-12744/.minikube/key.pem, removing ...
	I1205 06:58:57.757332   42195 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-12744/.minikube/key.pem
	I1205 06:58:57.757356   42195 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-12744/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-12744/.minikube/key.pem (1675 bytes)
	I1205 06:58:57.757415   42195 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-12744/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-12744/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-12744/.minikube/certs/ca-key.pem org=jenkins.test-preload-755077 san=[127.0.0.1 192.168.39.198 localhost minikube test-preload-755077]
	I1205 06:58:57.789223   42195 provision.go:177] copyRemoteCerts
	I1205 06:58:57.789306   42195 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 06:58:57.792075   42195 main.go:143] libmachine: domain test-preload-755077 has defined MAC address 52:54:00:ef:ff:0e in network mk-test-preload-755077
	I1205 06:58:57.792444   42195 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ef:ff:0e", ip: ""} in network mk-test-preload-755077: {Iface:virbr1 ExpiryTime:2025-12-05 07:58:55 +0000 UTC Type:0 Mac:52:54:00:ef:ff:0e Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:test-preload-755077 Clientid:01:52:54:00:ef:ff:0e}
	I1205 06:58:57.792467   42195 main.go:143] libmachine: domain test-preload-755077 has defined IP address 192.168.39.198 and MAC address 52:54:00:ef:ff:0e in network mk-test-preload-755077
	I1205 06:58:57.792621   42195 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-12744/.minikube/machines/test-preload-755077/id_rsa Username:docker}
	I1205 06:58:57.879590   42195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12744/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1205 06:58:57.906886   42195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12744/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1205 06:58:57.933467   42195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12744/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1205 06:58:57.960769   42195 provision.go:87] duration metric: took 210.268472ms to configureAuth
	I1205 06:58:57.960799   42195 buildroot.go:189] setting minikube options for container-runtime
	I1205 06:58:57.960995   42195 config.go:182] Loaded profile config "test-preload-755077": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1205 06:58:57.963649   42195 main.go:143] libmachine: domain test-preload-755077 has defined MAC address 52:54:00:ef:ff:0e in network mk-test-preload-755077
	I1205 06:58:57.964106   42195 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ef:ff:0e", ip: ""} in network mk-test-preload-755077: {Iface:virbr1 ExpiryTime:2025-12-05 07:58:55 +0000 UTC Type:0 Mac:52:54:00:ef:ff:0e Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:test-preload-755077 Clientid:01:52:54:00:ef:ff:0e}
	I1205 06:58:57.964140   42195 main.go:143] libmachine: domain test-preload-755077 has defined IP address 192.168.39.198 and MAC address 52:54:00:ef:ff:0e in network mk-test-preload-755077
	I1205 06:58:57.964332   42195 main.go:143] libmachine: Using SSH client type: native
	I1205 06:58:57.964544   42195 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.198 22 <nil> <nil>}
	I1205 06:58:57.964568   42195 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 06:58:58.228562   42195 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 06:58:58.228587   42195 machine.go:97] duration metric: took 852.77346ms to provisionDockerMachine
	I1205 06:58:58.228601   42195 start.go:293] postStartSetup for "test-preload-755077" (driver="kvm2")
	I1205 06:58:58.228613   42195 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 06:58:58.228748   42195 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 06:58:58.231700   42195 main.go:143] libmachine: domain test-preload-755077 has defined MAC address 52:54:00:ef:ff:0e in network mk-test-preload-755077
	I1205 06:58:58.232076   42195 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ef:ff:0e", ip: ""} in network mk-test-preload-755077: {Iface:virbr1 ExpiryTime:2025-12-05 07:58:55 +0000 UTC Type:0 Mac:52:54:00:ef:ff:0e Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:test-preload-755077 Clientid:01:52:54:00:ef:ff:0e}
	I1205 06:58:58.232098   42195 main.go:143] libmachine: domain test-preload-755077 has defined IP address 192.168.39.198 and MAC address 52:54:00:ef:ff:0e in network mk-test-preload-755077
	I1205 06:58:58.232224   42195 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-12744/.minikube/machines/test-preload-755077/id_rsa Username:docker}
	I1205 06:58:58.319264   42195 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 06:58:58.323978   42195 info.go:137] Remote host: Buildroot 2025.02
	I1205 06:58:58.324013   42195 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-12744/.minikube/addons for local assets ...
	I1205 06:58:58.324069   42195 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-12744/.minikube/files for local assets ...
	I1205 06:58:58.324154   42195 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-12744/.minikube/files/etc/ssl/certs/167022.pem -> 167022.pem in /etc/ssl/certs
	I1205 06:58:58.324245   42195 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 06:58:58.335381   42195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12744/.minikube/files/etc/ssl/certs/167022.pem --> /etc/ssl/certs/167022.pem (1708 bytes)
	I1205 06:58:58.363293   42195 start.go:296] duration metric: took 134.677504ms for postStartSetup
	I1205 06:58:58.363339   42195 fix.go:56] duration metric: took 14.508006241s for fixHost
	I1205 06:58:58.366125   42195 main.go:143] libmachine: domain test-preload-755077 has defined MAC address 52:54:00:ef:ff:0e in network mk-test-preload-755077
	I1205 06:58:58.366621   42195 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ef:ff:0e", ip: ""} in network mk-test-preload-755077: {Iface:virbr1 ExpiryTime:2025-12-05 07:58:55 +0000 UTC Type:0 Mac:52:54:00:ef:ff:0e Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:test-preload-755077 Clientid:01:52:54:00:ef:ff:0e}
	I1205 06:58:58.366652   42195 main.go:143] libmachine: domain test-preload-755077 has defined IP address 192.168.39.198 and MAC address 52:54:00:ef:ff:0e in network mk-test-preload-755077
	I1205 06:58:58.366834   42195 main.go:143] libmachine: Using SSH client type: native
	I1205 06:58:58.367022   42195 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.198 22 <nil> <nil>}
	I1205 06:58:58.367033   42195 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1205 06:58:58.484282   42195 main.go:143] libmachine: SSH cmd err, output: <nil>: 1764917938.441508397
	
	I1205 06:58:58.484309   42195 fix.go:216] guest clock: 1764917938.441508397
	I1205 06:58:58.484319   42195 fix.go:229] Guest: 2025-12-05 06:58:58.441508397 +0000 UTC Remote: 2025-12-05 06:58:58.363344611 +0000 UTC m=+14.605767367 (delta=78.163786ms)
	I1205 06:58:58.484342   42195 fix.go:200] guest clock delta is within tolerance: 78.163786ms
	I1205 06:58:58.484348   42195 start.go:83] releasing machines lock for "test-preload-755077", held for 14.629032618s
	I1205 06:58:58.486931   42195 main.go:143] libmachine: domain test-preload-755077 has defined MAC address 52:54:00:ef:ff:0e in network mk-test-preload-755077
	I1205 06:58:58.487387   42195 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ef:ff:0e", ip: ""} in network mk-test-preload-755077: {Iface:virbr1 ExpiryTime:2025-12-05 07:58:55 +0000 UTC Type:0 Mac:52:54:00:ef:ff:0e Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:test-preload-755077 Clientid:01:52:54:00:ef:ff:0e}
	I1205 06:58:58.487416   42195 main.go:143] libmachine: domain test-preload-755077 has defined IP address 192.168.39.198 and MAC address 52:54:00:ef:ff:0e in network mk-test-preload-755077
	I1205 06:58:58.487948   42195 ssh_runner.go:195] Run: cat /version.json
	I1205 06:58:58.488002   42195 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 06:58:58.490767   42195 main.go:143] libmachine: domain test-preload-755077 has defined MAC address 52:54:00:ef:ff:0e in network mk-test-preload-755077
	I1205 06:58:58.491033   42195 main.go:143] libmachine: domain test-preload-755077 has defined MAC address 52:54:00:ef:ff:0e in network mk-test-preload-755077
	I1205 06:58:58.491114   42195 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ef:ff:0e", ip: ""} in network mk-test-preload-755077: {Iface:virbr1 ExpiryTime:2025-12-05 07:58:55 +0000 UTC Type:0 Mac:52:54:00:ef:ff:0e Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:test-preload-755077 Clientid:01:52:54:00:ef:ff:0e}
	I1205 06:58:58.491134   42195 main.go:143] libmachine: domain test-preload-755077 has defined IP address 192.168.39.198 and MAC address 52:54:00:ef:ff:0e in network mk-test-preload-755077
	I1205 06:58:58.491315   42195 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-12744/.minikube/machines/test-preload-755077/id_rsa Username:docker}
	I1205 06:58:58.491463   42195 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ef:ff:0e", ip: ""} in network mk-test-preload-755077: {Iface:virbr1 ExpiryTime:2025-12-05 07:58:55 +0000 UTC Type:0 Mac:52:54:00:ef:ff:0e Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:test-preload-755077 Clientid:01:52:54:00:ef:ff:0e}
	I1205 06:58:58.491494   42195 main.go:143] libmachine: domain test-preload-755077 has defined IP address 192.168.39.198 and MAC address 52:54:00:ef:ff:0e in network mk-test-preload-755077
	I1205 06:58:58.491666   42195 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-12744/.minikube/machines/test-preload-755077/id_rsa Username:docker}
	I1205 06:58:58.602109   42195 ssh_runner.go:195] Run: systemctl --version
	I1205 06:58:58.607807   42195 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 06:58:58.751943   42195 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 06:58:58.759426   42195 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 06:58:58.759500   42195 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 06:58:58.778358   42195 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1205 06:58:58.778383   42195 start.go:496] detecting cgroup driver to use...
	I1205 06:58:58.778443   42195 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 06:58:58.796564   42195 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 06:58:58.812441   42195 docker.go:218] disabling cri-docker service (if available) ...
	I1205 06:58:58.812500   42195 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 06:58:58.829307   42195 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 06:58:58.844375   42195 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 06:58:58.988901   42195 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 06:58:59.210636   42195 docker.go:234] disabling docker service ...
	I1205 06:58:59.210722   42195 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 06:58:59.227298   42195 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 06:58:59.241830   42195 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 06:58:59.393180   42195 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 06:58:59.530587   42195 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 06:58:59.546098   42195 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 06:58:59.567707   42195 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1205 06:58:59.567808   42195 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 06:58:59.580139   42195 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1205 06:58:59.580209   42195 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 06:58:59.592158   42195 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 06:58:59.603735   42195 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 06:58:59.616132   42195 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 06:58:59.629316   42195 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 06:58:59.641265   42195 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 06:58:59.660367   42195 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 06:58:59.672494   42195 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 06:58:59.682637   42195 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1205 06:58:59.682703   42195 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1205 06:58:59.701862   42195 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 06:58:59.712784   42195 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 06:58:59.854527   42195 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1205 06:58:59.965770   42195 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 06:58:59.965852   42195 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 06:58:59.971144   42195 start.go:564] Will wait 60s for crictl version
	I1205 06:58:59.971208   42195 ssh_runner.go:195] Run: which crictl
	I1205 06:58:59.975142   42195 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1205 06:59:00.010708   42195 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1205 06:59:00.010792   42195 ssh_runner.go:195] Run: crio --version
	I1205 06:59:00.039769   42195 ssh_runner.go:195] Run: crio --version
	I1205 06:59:00.070255   42195 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.29.1 ...
	I1205 06:59:00.074301   42195 main.go:143] libmachine: domain test-preload-755077 has defined MAC address 52:54:00:ef:ff:0e in network mk-test-preload-755077
	I1205 06:59:00.074701   42195 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ef:ff:0e", ip: ""} in network mk-test-preload-755077: {Iface:virbr1 ExpiryTime:2025-12-05 07:58:55 +0000 UTC Type:0 Mac:52:54:00:ef:ff:0e Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:test-preload-755077 Clientid:01:52:54:00:ef:ff:0e}
	I1205 06:59:00.074731   42195 main.go:143] libmachine: domain test-preload-755077 has defined IP address 192.168.39.198 and MAC address 52:54:00:ef:ff:0e in network mk-test-preload-755077
	I1205 06:59:00.074900   42195 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1205 06:59:00.079414   42195 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 06:59:00.093549   42195 kubeadm.go:884] updating cluster {Name:test-preload-755077 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:test-preload-755077 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.198 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 06:59:00.093722   42195 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1205 06:59:00.093805   42195 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 06:59:00.127892   42195 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.2". assuming images are not preloaded.
	I1205 06:59:00.127957   42195 ssh_runner.go:195] Run: which lz4
	I1205 06:59:00.132362   42195 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1205 06:59:00.137339   42195 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1205 06:59:00.137369   42195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12744/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (340306595 bytes)
	I1205 06:59:01.301391   42195 crio.go:462] duration metric: took 1.16905695s to copy over tarball
	I1205 06:59:01.301470   42195 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1205 06:59:02.750118   42195 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.448585803s)
	I1205 06:59:02.750151   42195 crio.go:469] duration metric: took 1.448732867s to extract the tarball
	I1205 06:59:02.750161   42195 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1205 06:59:02.787077   42195 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 06:59:02.824735   42195 crio.go:514] all images are preloaded for cri-o runtime.
	I1205 06:59:02.824766   42195 cache_images.go:86] Images are preloaded, skipping loading
	I1205 06:59:02.824776   42195 kubeadm.go:935] updating node { 192.168.39.198 8443 v1.34.2 crio true true} ...
	I1205 06:59:02.824911   42195 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=test-preload-755077 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.198
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:test-preload-755077 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1205 06:59:02.825009   42195 ssh_runner.go:195] Run: crio config
	I1205 06:59:02.873449   42195 cni.go:84] Creating CNI manager for ""
	I1205 06:59:02.873469   42195 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 06:59:02.873490   42195 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1205 06:59:02.873517   42195 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.198 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-755077 NodeName:test-preload-755077 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.198"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.198 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 06:59:02.873675   42195 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.198
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-755077"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.198"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.198"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1205 06:59:02.873767   42195 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1205 06:59:02.885932   42195 binaries.go:51] Found k8s binaries, skipping transfer
	I1205 06:59:02.886041   42195 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 06:59:02.897258   42195 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (319 bytes)
	I1205 06:59:02.916642   42195 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1205 06:59:02.936274   42195 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2222 bytes)
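The kubeadm.yaml.new payload copied above (2222 bytes) is the same multi-document YAML printed at kubeadm.go:196 earlier in this log (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration separated by "---"). As a rough, illustrative sketch only (not minikube code; the file path and the gopkg.in/yaml.v3 dependency are assumptions), such a file can be split into its documents like this:

// sketch: list the apiVersion/kind of each document in a multi-document
// kubeadm config file such as the one written above
package main

import (
	"errors"
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml") // path taken from the log
	if err != nil {
		panic(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err != nil {
			if errors.Is(err, io.EOF) {
				break // no more YAML documents
			}
			panic(err)
		}
		fmt.Printf("%s / %s\n", doc.APIVersion, doc.Kind)
	}
}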
	I1205 06:59:02.956435   42195 ssh_runner.go:195] Run: grep 192.168.39.198	control-plane.minikube.internal$ /etc/hosts
	I1205 06:59:02.960678   42195 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.198	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 06:59:02.974952   42195 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 06:59:03.118450   42195 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 06:59:03.148946   42195 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/test-preload-755077 for IP: 192.168.39.198
	I1205 06:59:03.148976   42195 certs.go:195] generating shared ca certs ...
	I1205 06:59:03.148992   42195 certs.go:227] acquiring lock for ca certs: {Name:mk31e04487a5cf4ece02d9725a994239b98a3eba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 06:59:03.149150   42195 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-12744/.minikube/ca.key
	I1205 06:59:03.149191   42195 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-12744/.minikube/proxy-client-ca.key
	I1205 06:59:03.149201   42195 certs.go:257] generating profile certs ...
	I1205 06:59:03.149289   42195 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/test-preload-755077/client.key
	I1205 06:59:03.149352   42195 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/test-preload-755077/apiserver.key.b66c418d
	I1205 06:59:03.149390   42195 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/test-preload-755077/proxy-client.key
	I1205 06:59:03.149497   42195 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-12744/.minikube/certs/16702.pem (1338 bytes)
	W1205 06:59:03.149535   42195 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-12744/.minikube/certs/16702_empty.pem, impossibly tiny 0 bytes
	I1205 06:59:03.149545   42195 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-12744/.minikube/certs/ca-key.pem (1679 bytes)
	I1205 06:59:03.149572   42195 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-12744/.minikube/certs/ca.pem (1078 bytes)
	I1205 06:59:03.149597   42195 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-12744/.minikube/certs/cert.pem (1123 bytes)
	I1205 06:59:03.149622   42195 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-12744/.minikube/certs/key.pem (1675 bytes)
	I1205 06:59:03.149665   42195 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-12744/.minikube/files/etc/ssl/certs/167022.pem (1708 bytes)
	I1205 06:59:03.150253   42195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12744/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 06:59:03.185071   42195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12744/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1205 06:59:03.215029   42195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12744/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 06:59:03.247949   42195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12744/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1205 06:59:03.275939   42195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/test-preload-755077/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1205 06:59:03.304329   42195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/test-preload-755077/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1205 06:59:03.333028   42195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/test-preload-755077/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 06:59:03.360781   42195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/test-preload-755077/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1205 06:59:03.388607   42195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12744/.minikube/certs/16702.pem --> /usr/share/ca-certificates/16702.pem (1338 bytes)
	I1205 06:59:03.417043   42195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12744/.minikube/files/etc/ssl/certs/167022.pem --> /usr/share/ca-certificates/167022.pem (1708 bytes)
	I1205 06:59:03.445883   42195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12744/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 06:59:03.474178   42195 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 06:59:03.493760   42195 ssh_runner.go:195] Run: openssl version
	I1205 06:59:03.500219   42195 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/16702.pem
	I1205 06:59:03.511285   42195 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/16702.pem /etc/ssl/certs/16702.pem
	I1205 06:59:03.522319   42195 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16702.pem
	I1205 06:59:03.527256   42195 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 06:17 /usr/share/ca-certificates/16702.pem
	I1205 06:59:03.527316   42195 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16702.pem
	I1205 06:59:03.534466   42195 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1205 06:59:03.545827   42195 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/16702.pem /etc/ssl/certs/51391683.0
	I1205 06:59:03.556851   42195 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/167022.pem
	I1205 06:59:03.567989   42195 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/167022.pem /etc/ssl/certs/167022.pem
	I1205 06:59:03.578873   42195 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/167022.pem
	I1205 06:59:03.583725   42195 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 06:17 /usr/share/ca-certificates/167022.pem
	I1205 06:59:03.583778   42195 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/167022.pem
	I1205 06:59:03.590782   42195 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1205 06:59:03.601774   42195 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/167022.pem /etc/ssl/certs/3ec20f2e.0
	I1205 06:59:03.612612   42195 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1205 06:59:03.623863   42195 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1205 06:59:03.634886   42195 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 06:59:03.640262   42195 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 06:05 /usr/share/ca-certificates/minikubeCA.pem
	I1205 06:59:03.640325   42195 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 06:59:03.647572   42195 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1205 06:59:03.658762   42195 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1205 06:59:03.669622   42195 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 06:59:03.674626   42195 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1205 06:59:03.681915   42195 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1205 06:59:03.688922   42195 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1205 06:59:03.696275   42195 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1205 06:59:03.703457   42195 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1205 06:59:03.710332   42195 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
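The six openssl invocations above are 24-hour expiry checks ("-checkend 86400") against the control-plane certificates before the cluster restart. A standalone Go sketch of the equivalent check is below; it is illustrative only, not minikube's implementation, and simply reuses one certificate path from the log:

// sketch: report whether a PEM certificate expires within the next 24h,
// mirroring `openssl x509 -noout -checkend 86400`
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate expires within 24h")
	} else {
		fmt.Println("certificate valid for at least 24h, NotAfter:", cert.NotAfter)
	}
}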
	I1205 06:59:03.717356   42195 kubeadm.go:401] StartCluster: {Name:test-preload-755077 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.
34.2 ClusterName:test-preload-755077 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.198 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[]
MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 06:59:03.717432   42195 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 06:59:03.717477   42195 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 06:59:03.750326   42195 cri.go:89] found id: ""
	I1205 06:59:03.750400   42195 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 06:59:03.762604   42195 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1205 06:59:03.762630   42195 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1205 06:59:03.762696   42195 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1205 06:59:03.774296   42195 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1205 06:59:03.774669   42195 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-755077" does not appear in /home/jenkins/minikube-integration/21997-12744/kubeconfig
	I1205 06:59:03.774780   42195 kubeconfig.go:62] /home/jenkins/minikube-integration/21997-12744/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-755077" cluster setting kubeconfig missing "test-preload-755077" context setting]
	I1205 06:59:03.775056   42195 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-12744/kubeconfig: {Name:mka919c4eb7b6e761ae422db15b3daf8c8fde4d7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 06:59:03.775548   42195 kapi.go:59] client config for test-preload-755077: &rest.Config{Host:"https://192.168.39.198:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21997-12744/.minikube/profiles/test-preload-755077/client.crt", KeyFile:"/home/jenkins/minikube-integration/21997-12744/.minikube/profiles/test-preload-755077/client.key", CAFile:"/home/jenkins/minikube-integration/21997-12744/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(
nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x28156e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1205 06:59:03.775979   42195 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1205 06:59:03.776001   42195 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1205 06:59:03.776006   42195 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1205 06:59:03.776011   42195 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1205 06:59:03.776015   42195 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1205 06:59:03.776334   42195 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1205 06:59:03.787136   42195 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.39.198
	I1205 06:59:03.787169   42195 kubeadm.go:1161] stopping kube-system containers ...
	I1205 06:59:03.787182   42195 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1205 06:59:03.787242   42195 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 06:59:03.821227   42195 cri.go:89] found id: ""
	I1205 06:59:03.821292   42195 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1205 06:59:03.845198   42195 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 06:59:03.856816   42195 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 06:59:03.856839   42195 kubeadm.go:158] found existing configuration files:
	
	I1205 06:59:03.856893   42195 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 06:59:03.866948   42195 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 06:59:03.867023   42195 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 06:59:03.878186   42195 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 06:59:03.888233   42195 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 06:59:03.888302   42195 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 06:59:03.899301   42195 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 06:59:03.909977   42195 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 06:59:03.910068   42195 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 06:59:03.921099   42195 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 06:59:03.931962   42195 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 06:59:03.932019   42195 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1205 06:59:03.943937   42195 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 06:59:03.956089   42195 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 06:59:04.011840   42195 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 06:59:05.798602   42195 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.786721524s)
	I1205 06:59:05.798701   42195 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1205 06:59:06.067382   42195 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 06:59:06.138471   42195 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1205 06:59:06.230721   42195 api_server.go:52] waiting for apiserver process to appear ...
	I1205 06:59:06.230832   42195 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:59:06.730968   42195 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:59:07.231090   42195 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:59:07.731299   42195 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:59:07.765626   42195 api_server.go:72] duration metric: took 1.534919848s to wait for apiserver process to appear ...
	I1205 06:59:07.765656   42195 api_server.go:88] waiting for apiserver healthz status ...
	I1205 06:59:07.765716   42195 api_server.go:253] Checking apiserver healthz at https://192.168.39.198:8443/healthz ...
	I1205 06:59:10.239668   42195 api_server.go:279] https://192.168.39.198:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1205 06:59:10.239721   42195 api_server.go:103] status: https://192.168.39.198:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1205 06:59:10.239739   42195 api_server.go:253] Checking apiserver healthz at https://192.168.39.198:8443/healthz ...
	I1205 06:59:10.351339   42195 api_server.go:279] https://192.168.39.198:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 06:59:10.351369   42195 api_server.go:103] status: https://192.168.39.198:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 06:59:10.351392   42195 api_server.go:253] Checking apiserver healthz at https://192.168.39.198:8443/healthz ...
	I1205 06:59:10.359362   42195 api_server.go:279] https://192.168.39.198:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 06:59:10.359386   42195 api_server.go:103] status: https://192.168.39.198:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 06:59:10.765906   42195 api_server.go:253] Checking apiserver healthz at https://192.168.39.198:8443/healthz ...
	I1205 06:59:10.770353   42195 api_server.go:279] https://192.168.39.198:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 06:59:10.770379   42195 api_server.go:103] status: https://192.168.39.198:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 06:59:11.266008   42195 api_server.go:253] Checking apiserver healthz at https://192.168.39.198:8443/healthz ...
	I1205 06:59:11.289041   42195 api_server.go:279] https://192.168.39.198:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 06:59:11.289066   42195 api_server.go:103] status: https://192.168.39.198:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 06:59:11.766729   42195 api_server.go:253] Checking apiserver healthz at https://192.168.39.198:8443/healthz ...
	I1205 06:59:11.773021   42195 api_server.go:279] https://192.168.39.198:8443/healthz returned 200:
	ok
	I1205 06:59:11.779613   42195 api_server.go:141] control plane version: v1.34.2
	I1205 06:59:11.779647   42195 api_server.go:131] duration metric: took 4.013983187s to wait for apiserver health ...
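The preceding block is minikube polling the apiserver's /healthz endpoint roughly every 500ms until the rbac/bootstrap-roles, scheduling/bootstrap-system-priority-classes and bootstrap-controller post-start hooks finish and the endpoint returns 200. The following is a rough standalone Go sketch of such a retry loop, not minikube's api_server.go code: it skips TLS verification purely for illustration, and an unauthenticated probe may be rejected with 403 depending on the cluster's RBAC state (as seen at the start of the log above).

// sketch: retry GET https://<apiserver>/healthz until it returns 200
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		// self-signed test cluster CA; verification skipped for illustration only
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	url := "https://192.168.39.198:8443/healthz" // endpoint from the log above
	for {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy")
				return
			}
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
}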
	I1205 06:59:11.779658   42195 cni.go:84] Creating CNI manager for ""
	I1205 06:59:11.779666   42195 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 06:59:11.781210   42195 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1205 06:59:11.782777   42195 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1205 06:59:11.800303   42195 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1205 06:59:11.827276   42195 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 06:59:11.832823   42195 system_pods.go:59] 7 kube-system pods found
	I1205 06:59:11.832871   42195 system_pods.go:61] "coredns-66bc5c9577-67p6x" [21f85926-5250-4dc9-aeae-bc72444916fa] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 06:59:11.832883   42195 system_pods.go:61] "etcd-test-preload-755077" [1f6b8857-f635-4db3-beb7-11284f612642] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1205 06:59:11.832896   42195 system_pods.go:61] "kube-apiserver-test-preload-755077" [92e499f8-7c8c-4c2a-b542-307fe7bebd49] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1205 06:59:11.832912   42195 system_pods.go:61] "kube-controller-manager-test-preload-755077" [9f6122cd-db6b-48b2-a7f8-1cc73f51475b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1205 06:59:11.832921   42195 system_pods.go:61] "kube-proxy-ctlgj" [ca360ad9-341d-4d31-8cf0-33aae295b3a9] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1205 06:59:11.832933   42195 system_pods.go:61] "kube-scheduler-test-preload-755077" [5f5f8aad-776b-4dc2-aae4-48fb55fdc63e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1205 06:59:11.832980   42195 system_pods.go:61] "storage-provisioner" [0194a935-5452-4982-9002-9673ebc2f246] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1205 06:59:11.832994   42195 system_pods.go:74] duration metric: took 5.694637ms to wait for pod list to return data ...
	I1205 06:59:11.833004   42195 node_conditions.go:102] verifying NodePressure condition ...
	I1205 06:59:11.836850   42195 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1205 06:59:11.836874   42195 node_conditions.go:123] node cpu capacity is 2
	I1205 06:59:11.836889   42195 node_conditions.go:105] duration metric: took 3.880013ms to run NodePressure ...
	I1205 06:59:11.836941   42195 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 06:59:12.106342   42195 kubeadm.go:729] waiting for restarted kubelet to initialise ...
	I1205 06:59:12.110046   42195 kubeadm.go:744] kubelet initialised
	I1205 06:59:12.110074   42195 kubeadm.go:745] duration metric: took 3.705825ms waiting for restarted kubelet to initialise ...
	I1205 06:59:12.110092   42195 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1205 06:59:12.125550   42195 ops.go:34] apiserver oom_adj: -16
	I1205 06:59:12.125574   42195 kubeadm.go:602] duration metric: took 8.362937221s to restartPrimaryControlPlane
	I1205 06:59:12.125582   42195 kubeadm.go:403] duration metric: took 8.408233596s to StartCluster
	I1205 06:59:12.125599   42195 settings.go:142] acquiring lock: {Name:mk2f276bdecf61f8264687dd612372cc78cfacbb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 06:59:12.125679   42195 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21997-12744/kubeconfig
	I1205 06:59:12.126394   42195 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-12744/kubeconfig: {Name:mka919c4eb7b6e761ae422db15b3daf8c8fde4d7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 06:59:12.126675   42195 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.39.198 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 06:59:12.126734   42195 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1205 06:59:12.126817   42195 addons.go:70] Setting storage-provisioner=true in profile "test-preload-755077"
	I1205 06:59:12.126836   42195 addons.go:239] Setting addon storage-provisioner=true in "test-preload-755077"
	W1205 06:59:12.126855   42195 addons.go:248] addon storage-provisioner should already be in state true
	I1205 06:59:12.126853   42195 addons.go:70] Setting default-storageclass=true in profile "test-preload-755077"
	I1205 06:59:12.126877   42195 host.go:66] Checking if "test-preload-755077" exists ...
	I1205 06:59:12.126885   42195 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "test-preload-755077"
	I1205 06:59:12.126941   42195 config.go:182] Loaded profile config "test-preload-755077": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1205 06:59:12.128372   42195 out.go:179] * Verifying Kubernetes components...
	I1205 06:59:12.129358   42195 kapi.go:59] client config for test-preload-755077: &rest.Config{Host:"https://192.168.39.198:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21997-12744/.minikube/profiles/test-preload-755077/client.crt", KeyFile:"/home/jenkins/minikube-integration/21997-12744/.minikube/profiles/test-preload-755077/client.key", CAFile:"/home/jenkins/minikube-integration/21997-12744/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(
nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x28156e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1205 06:59:12.129581   42195 addons.go:239] Setting addon default-storageclass=true in "test-preload-755077"
	W1205 06:59:12.129593   42195 addons.go:248] addon default-storageclass should already be in state true
	I1205 06:59:12.129610   42195 host.go:66] Checking if "test-preload-755077" exists ...
	I1205 06:59:12.129731   42195 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 06:59:12.129765   42195 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 06:59:12.131223   42195 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1205 06:59:12.131241   42195 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1205 06:59:12.131277   42195 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:59:12.131295   42195 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1205 06:59:12.134182   42195 main.go:143] libmachine: domain test-preload-755077 has defined MAC address 52:54:00:ef:ff:0e in network mk-test-preload-755077
	I1205 06:59:12.134341   42195 main.go:143] libmachine: domain test-preload-755077 has defined MAC address 52:54:00:ef:ff:0e in network mk-test-preload-755077
	I1205 06:59:12.134647   42195 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ef:ff:0e", ip: ""} in network mk-test-preload-755077: {Iface:virbr1 ExpiryTime:2025-12-05 07:58:55 +0000 UTC Type:0 Mac:52:54:00:ef:ff:0e Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:test-preload-755077 Clientid:01:52:54:00:ef:ff:0e}
	I1205 06:59:12.134694   42195 main.go:143] libmachine: domain test-preload-755077 has defined IP address 192.168.39.198 and MAC address 52:54:00:ef:ff:0e in network mk-test-preload-755077
	I1205 06:59:12.134753   42195 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ef:ff:0e", ip: ""} in network mk-test-preload-755077: {Iface:virbr1 ExpiryTime:2025-12-05 07:58:55 +0000 UTC Type:0 Mac:52:54:00:ef:ff:0e Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:test-preload-755077 Clientid:01:52:54:00:ef:ff:0e}
	I1205 06:59:12.134791   42195 main.go:143] libmachine: domain test-preload-755077 has defined IP address 192.168.39.198 and MAC address 52:54:00:ef:ff:0e in network mk-test-preload-755077
	I1205 06:59:12.134853   42195 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-12744/.minikube/machines/test-preload-755077/id_rsa Username:docker}
	I1205 06:59:12.135127   42195 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-12744/.minikube/machines/test-preload-755077/id_rsa Username:docker}
	I1205 06:59:12.406707   42195 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 06:59:12.434658   42195 node_ready.go:35] waiting up to 6m0s for node "test-preload-755077" to be "Ready" ...
	I1205 06:59:12.481123   42195 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:59:12.533388   42195 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:59:13.130699   42195 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1205 06:59:13.131750   42195 addons.go:530] duration metric: took 1.005022108s for enable addons: enabled=[storage-provisioner default-storageclass]
	W1205 06:59:14.439116   42195 node_ready.go:57] node "test-preload-755077" has "Ready":"False" status (will retry)
	W1205 06:59:16.938010   42195 node_ready.go:57] node "test-preload-755077" has "Ready":"False" status (will retry)
	W1205 06:59:19.438470   42195 node_ready.go:57] node "test-preload-755077" has "Ready":"False" status (will retry)
	I1205 06:59:20.939129   42195 node_ready.go:49] node "test-preload-755077" is "Ready"
	I1205 06:59:20.939159   42195 node_ready.go:38] duration metric: took 8.50445654s for node "test-preload-755077" to be "Ready" ...
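Above, node_ready.go polls the node object until its Ready condition is True, which here takes about 8.5s after the kubelet restart. A hedged client-go sketch of an equivalent wait follows; the kubeconfig path and node name are taken from the log, while the 2-second poll interval is an assumption and this is not minikube's actual wait code.

// sketch: poll a node until its NodeReady condition is True
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/21997-12744/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for {
		node, err := client.CoreV1().Nodes().Get(context.TODO(), "test-preload-755077", metav1.GetOptions{})
		if err == nil {
			for _, cond := range node.Status.Conditions {
				if cond.Type == corev1.NodeReady && cond.Status == corev1.ConditionTrue {
					fmt.Println("node is Ready")
					return
				}
			}
		}
		time.Sleep(2 * time.Second) // assumed poll interval
	}
}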
	I1205 06:59:20.939177   42195 api_server.go:52] waiting for apiserver process to appear ...
	I1205 06:59:20.939234   42195 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:59:20.962267   42195 api_server.go:72] duration metric: took 8.835530823s to wait for apiserver process to appear ...
	I1205 06:59:20.962295   42195 api_server.go:88] waiting for apiserver healthz status ...
	I1205 06:59:20.962317   42195 api_server.go:253] Checking apiserver healthz at https://192.168.39.198:8443/healthz ...
	I1205 06:59:20.967628   42195 api_server.go:279] https://192.168.39.198:8443/healthz returned 200:
	ok
	I1205 06:59:20.968956   42195 api_server.go:141] control plane version: v1.34.2
	I1205 06:59:20.968977   42195 api_server.go:131] duration metric: took 6.67704ms to wait for apiserver health ...
	I1205 06:59:20.968985   42195 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 06:59:20.974135   42195 system_pods.go:59] 7 kube-system pods found
	I1205 06:59:20.974159   42195 system_pods.go:61] "coredns-66bc5c9577-67p6x" [21f85926-5250-4dc9-aeae-bc72444916fa] Running
	I1205 06:59:20.974176   42195 system_pods.go:61] "etcd-test-preload-755077" [1f6b8857-f635-4db3-beb7-11284f612642] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1205 06:59:20.974191   42195 system_pods.go:61] "kube-apiserver-test-preload-755077" [92e499f8-7c8c-4c2a-b542-307fe7bebd49] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1205 06:59:20.974204   42195 system_pods.go:61] "kube-controller-manager-test-preload-755077" [9f6122cd-db6b-48b2-a7f8-1cc73f51475b] Running
	I1205 06:59:20.974217   42195 system_pods.go:61] "kube-proxy-ctlgj" [ca360ad9-341d-4d31-8cf0-33aae295b3a9] Running
	I1205 06:59:20.974226   42195 system_pods.go:61] "kube-scheduler-test-preload-755077" [5f5f8aad-776b-4dc2-aae4-48fb55fdc63e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1205 06:59:20.974235   42195 system_pods.go:61] "storage-provisioner" [0194a935-5452-4982-9002-9673ebc2f246] Running
	I1205 06:59:20.974244   42195 system_pods.go:74] duration metric: took 5.252843ms to wait for pod list to return data ...
	I1205 06:59:20.974255   42195 default_sa.go:34] waiting for default service account to be created ...
	I1205 06:59:20.977320   42195 default_sa.go:45] found service account: "default"
	I1205 06:59:20.977339   42195 default_sa.go:55] duration metric: took 3.079255ms for default service account to be created ...
	I1205 06:59:20.977346   42195 system_pods.go:116] waiting for k8s-apps to be running ...
	I1205 06:59:20.981286   42195 system_pods.go:86] 7 kube-system pods found
	I1205 06:59:20.981311   42195 system_pods.go:89] "coredns-66bc5c9577-67p6x" [21f85926-5250-4dc9-aeae-bc72444916fa] Running
	I1205 06:59:20.981322   42195 system_pods.go:89] "etcd-test-preload-755077" [1f6b8857-f635-4db3-beb7-11284f612642] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1205 06:59:20.981336   42195 system_pods.go:89] "kube-apiserver-test-preload-755077" [92e499f8-7c8c-4c2a-b542-307fe7bebd49] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1205 06:59:20.981348   42195 system_pods.go:89] "kube-controller-manager-test-preload-755077" [9f6122cd-db6b-48b2-a7f8-1cc73f51475b] Running
	I1205 06:59:20.981354   42195 system_pods.go:89] "kube-proxy-ctlgj" [ca360ad9-341d-4d31-8cf0-33aae295b3a9] Running
	I1205 06:59:20.981366   42195 system_pods.go:89] "kube-scheduler-test-preload-755077" [5f5f8aad-776b-4dc2-aae4-48fb55fdc63e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1205 06:59:20.981374   42195 system_pods.go:89] "storage-provisioner" [0194a935-5452-4982-9002-9673ebc2f246] Running
	I1205 06:59:20.981382   42195 system_pods.go:126] duration metric: took 4.03096ms to wait for k8s-apps to be running ...
	I1205 06:59:20.981395   42195 system_svc.go:44] waiting for kubelet service to be running ....
	I1205 06:59:20.981442   42195 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 06:59:20.998157   42195 system_svc.go:56] duration metric: took 16.750929ms WaitForService to wait for kubelet
	I1205 06:59:20.998196   42195 kubeadm.go:587] duration metric: took 8.871465444s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 06:59:20.998217   42195 node_conditions.go:102] verifying NodePressure condition ...
	I1205 06:59:21.001531   42195 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1205 06:59:21.001559   42195 node_conditions.go:123] node cpu capacity is 2
	I1205 06:59:21.001575   42195 node_conditions.go:105] duration metric: took 3.350736ms to run NodePressure ...
	I1205 06:59:21.001590   42195 start.go:242] waiting for startup goroutines ...
	I1205 06:59:21.001601   42195 start.go:247] waiting for cluster config update ...
	I1205 06:59:21.001616   42195 start.go:256] writing updated cluster config ...
	I1205 06:59:21.001923   42195 ssh_runner.go:195] Run: rm -f paused
	I1205 06:59:21.007167   42195 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1205 06:59:21.007638   42195 kapi.go:59] client config for test-preload-755077: &rest.Config{Host:"https://192.168.39.198:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21997-12744/.minikube/profiles/test-preload-755077/client.crt", KeyFile:"/home/jenkins/minikube-integration/21997-12744/.minikube/profiles/test-preload-755077/client.key", CAFile:"/home/jenkins/minikube-integration/21997-12744/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(
nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x28156e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1205 06:59:21.011160   42195 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-67p6x" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 06:59:21.015807   42195 pod_ready.go:94] pod "coredns-66bc5c9577-67p6x" is "Ready"
	I1205 06:59:21.015833   42195 pod_ready.go:86] duration metric: took 4.652475ms for pod "coredns-66bc5c9577-67p6x" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 06:59:21.017634   42195 pod_ready.go:83] waiting for pod "etcd-test-preload-755077" in "kube-system" namespace to be "Ready" or be gone ...
	W1205 06:59:23.024806   42195 pod_ready.go:104] pod "etcd-test-preload-755077" is not "Ready", error: <nil>
	I1205 06:59:25.524399   42195 pod_ready.go:94] pod "etcd-test-preload-755077" is "Ready"
	I1205 06:59:25.524427   42195 pod_ready.go:86] duration metric: took 4.506773177s for pod "etcd-test-preload-755077" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 06:59:25.526898   42195 pod_ready.go:83] waiting for pod "kube-apiserver-test-preload-755077" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 06:59:25.531332   42195 pod_ready.go:94] pod "kube-apiserver-test-preload-755077" is "Ready"
	I1205 06:59:25.531353   42195 pod_ready.go:86] duration metric: took 4.430762ms for pod "kube-apiserver-test-preload-755077" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 06:59:25.533455   42195 pod_ready.go:83] waiting for pod "kube-controller-manager-test-preload-755077" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 06:59:25.537806   42195 pod_ready.go:94] pod "kube-controller-manager-test-preload-755077" is "Ready"
	I1205 06:59:25.537826   42195 pod_ready.go:86] duration metric: took 4.353673ms for pod "kube-controller-manager-test-preload-755077" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 06:59:25.539521   42195 pod_ready.go:83] waiting for pod "kube-proxy-ctlgj" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 06:59:25.721852   42195 pod_ready.go:94] pod "kube-proxy-ctlgj" is "Ready"
	I1205 06:59:25.721879   42195 pod_ready.go:86] duration metric: took 182.335887ms for pod "kube-proxy-ctlgj" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 06:59:25.921294   42195 pod_ready.go:83] waiting for pod "kube-scheduler-test-preload-755077" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 06:59:27.122130   42195 pod_ready.go:94] pod "kube-scheduler-test-preload-755077" is "Ready"
	I1205 06:59:27.122155   42195 pod_ready.go:86] duration metric: took 1.200833612s for pod "kube-scheduler-test-preload-755077" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 06:59:27.122167   42195 pod_ready.go:40] duration metric: took 6.11497055s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1205 06:59:27.163027   42195 start.go:625] kubectl: 1.34.2, cluster: 1.34.2 (minor skew: 0)
	I1205 06:59:27.164555   42195 out.go:179] * Done! kubectl is now configured to use "test-preload-755077" cluster and "default" namespace by default
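
The sequence above is minikube's readiness gating for the restarted preload cluster: node Ready, apiserver /healthz, kube-system pods, default service account, kubelet service, and finally the per-pod Ready waits. A rough manual equivalent (a sketch using the profile's kubectl context, not the harness's own wait code) would be:

    # hypothetical spot-check against the "test-preload-755077" context the log says was configured
    kubectl --context test-preload-755077 wait --for=condition=Ready node/test-preload-755077 --timeout=6m
    kubectl --context test-preload-755077 wait --for=condition=Ready pod --all -n kube-system --timeout=4m
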
	
	
	==> CRI-O <==
	Dec 05 06:59:27 test-preload-755077 crio[831]: time="2025-12-05 06:59:27.895586783Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1764917967895564726,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:132143,},InodesUsed:&UInt64Value{Value:55,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=35af1906-2b5a-425d-a70a-3d57e71ebeae name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 06:59:27 test-preload-755077 crio[831]: time="2025-12-05 06:59:27.896414263Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cd585851-b156-428c-ba76-86c1d6cdf1ce name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 06:59:27 test-preload-755077 crio[831]: time="2025-12-05 06:59:27.896572712Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cd585851-b156-428c-ba76-86c1d6cdf1ce name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 06:59:27 test-preload-755077 crio[831]: time="2025-12-05 06:59:27.897097752Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d15ae53cbce5625d4b8edcfa1b0dd8e8e7ce6730d7b19f7124ba5dcac05d3599,PodSandboxId:67f77b7434196c39030e565b462aabf96987a2901d27352cde54eb3ed59ce8c3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1764917959212940751,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-67p6x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21f85926-5250-4dc9-aeae-bc72444916fa,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83396e3c7e5590c8edb194ba0e97541a5dbb841e7a44ff3fa0d55fa3d7dcd78c,PodSandboxId:f3f72f6ad9e189856973103ec8310e6215e836b864cd3cfd87bac2839e2b7d89,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1764917951623480390,Labe
ls:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0194a935-5452-4982-9002-9673ebc2f246,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0eb26d8a947be3c80d73bbe7a0985a1298f68623a3e8fd645f5464e545899a5,PodSandboxId:72b09fb9d6164f5aea0811191162706da9deda90de0ccef7da0d6f449ab41329,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1764917951597375626,Labels:map[string]string{
io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ctlgj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca360ad9-341d-4d31-8cf0-33aae295b3a9,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ba6ceb13a50115b472f6f8771865da38d2e7ac96e99909cc405bde24246726f,PodSandboxId:539ceb7c33627b10cbe16b42e98afa7defa1e1b12c4182bc8850f352aa2fabf9,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1764917947364642491,Labels:map[string]string{io.kubernetes.container.name: etcd,io.k
ubernetes.pod.name: etcd-test-preload-755077,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c18d0f5a5d504711b476dc186077b3e,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f623b814708f189a004045eaf417fcc829a6987724f614bc6877a54f88a6026,PodSandboxId:d8d4a514a4b62b73862134ef363e7501ed6d8003fb49e48740f6c9a62211b6c4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,Crea
tedAt:1764917947342209415,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-755077,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d91dc472ae200a6e65c2c1edcb1b38c0,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:463573787d93295d3414cf89078e8d2c15e95fb59b515184a9ce69d3166250f8,PodSandboxId:b4c85b400606e67ddb9084bdac937fc019c918f67c84a9ef82d0d4699cfd8113,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:
,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1764917947319996626,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-755077,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9366ae24310b1019e4240b72e096fe1,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aace3bb431811c670ff0069480a874a9104b6757f32c8772e1a35a91d3272e9b,PodSandboxId:baafb01ba24da7db19bc0fc2290cb6fc205448446e23350ba006441d55418034,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&Im
ageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1764917947323933808,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-755077,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d539c266be1e9a0f9bc3bfc2864fb496,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=cd585851-b156-428c-ba76-86c1d6cdf1ce name=/runtime.v1.RuntimeServic
e/ListContainers
	Dec 05 06:59:27 test-preload-755077 crio[831]: time="2025-12-05 06:59:27.929466049Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7e3e7fc6-625b-4566-b599-b16226141387 name=/runtime.v1.RuntimeService/Version
	Dec 05 06:59:27 test-preload-755077 crio[831]: time="2025-12-05 06:59:27.929717659Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7e3e7fc6-625b-4566-b599-b16226141387 name=/runtime.v1.RuntimeService/Version
	Dec 05 06:59:27 test-preload-755077 crio[831]: time="2025-12-05 06:59:27.931054146Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=df164ce7-e5d7-4f69-9993-e847fb33596d name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 06:59:27 test-preload-755077 crio[831]: time="2025-12-05 06:59:27.931635129Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1764917967931614022,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:132143,},InodesUsed:&UInt64Value{Value:55,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=df164ce7-e5d7-4f69-9993-e847fb33596d name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 06:59:27 test-preload-755077 crio[831]: time="2025-12-05 06:59:27.932508166Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=366e8680-b6ac-4d09-af20-920ca1104841 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 06:59:27 test-preload-755077 crio[831]: time="2025-12-05 06:59:27.932562974Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=366e8680-b6ac-4d09-af20-920ca1104841 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 06:59:27 test-preload-755077 crio[831]: time="2025-12-05 06:59:27.932723927Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d15ae53cbce5625d4b8edcfa1b0dd8e8e7ce6730d7b19f7124ba5dcac05d3599,PodSandboxId:67f77b7434196c39030e565b462aabf96987a2901d27352cde54eb3ed59ce8c3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1764917959212940751,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-67p6x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21f85926-5250-4dc9-aeae-bc72444916fa,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83396e3c7e5590c8edb194ba0e97541a5dbb841e7a44ff3fa0d55fa3d7dcd78c,PodSandboxId:f3f72f6ad9e189856973103ec8310e6215e836b864cd3cfd87bac2839e2b7d89,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1764917951623480390,Labe
ls:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0194a935-5452-4982-9002-9673ebc2f246,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0eb26d8a947be3c80d73bbe7a0985a1298f68623a3e8fd645f5464e545899a5,PodSandboxId:72b09fb9d6164f5aea0811191162706da9deda90de0ccef7da0d6f449ab41329,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1764917951597375626,Labels:map[string]string{
io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ctlgj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca360ad9-341d-4d31-8cf0-33aae295b3a9,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ba6ceb13a50115b472f6f8771865da38d2e7ac96e99909cc405bde24246726f,PodSandboxId:539ceb7c33627b10cbe16b42e98afa7defa1e1b12c4182bc8850f352aa2fabf9,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1764917947364642491,Labels:map[string]string{io.kubernetes.container.name: etcd,io.k
ubernetes.pod.name: etcd-test-preload-755077,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c18d0f5a5d504711b476dc186077b3e,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f623b814708f189a004045eaf417fcc829a6987724f614bc6877a54f88a6026,PodSandboxId:d8d4a514a4b62b73862134ef363e7501ed6d8003fb49e48740f6c9a62211b6c4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,Crea
tedAt:1764917947342209415,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-755077,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d91dc472ae200a6e65c2c1edcb1b38c0,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:463573787d93295d3414cf89078e8d2c15e95fb59b515184a9ce69d3166250f8,PodSandboxId:b4c85b400606e67ddb9084bdac937fc019c918f67c84a9ef82d0d4699cfd8113,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:
,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1764917947319996626,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-755077,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9366ae24310b1019e4240b72e096fe1,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aace3bb431811c670ff0069480a874a9104b6757f32c8772e1a35a91d3272e9b,PodSandboxId:baafb01ba24da7db19bc0fc2290cb6fc205448446e23350ba006441d55418034,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&Im
ageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1764917947323933808,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-755077,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d539c266be1e9a0f9bc3bfc2864fb496,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=366e8680-b6ac-4d09-af20-920ca1104841 name=/runtime.v1.RuntimeServic
e/ListContainers
	Dec 05 06:59:27 test-preload-755077 crio[831]: time="2025-12-05 06:59:27.964136442Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b4bfbb9d-e7c4-4862-b765-9fe35fa84967 name=/runtime.v1.RuntimeService/Version
	Dec 05 06:59:27 test-preload-755077 crio[831]: time="2025-12-05 06:59:27.964357693Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b4bfbb9d-e7c4-4862-b765-9fe35fa84967 name=/runtime.v1.RuntimeService/Version
	Dec 05 06:59:27 test-preload-755077 crio[831]: time="2025-12-05 06:59:27.965897675Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=97ad9f50-f981-4307-b0f8-33690169945a name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 06:59:27 test-preload-755077 crio[831]: time="2025-12-05 06:59:27.966391958Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1764917967966350002,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:132143,},InodesUsed:&UInt64Value{Value:55,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=97ad9f50-f981-4307-b0f8-33690169945a name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 06:59:27 test-preload-755077 crio[831]: time="2025-12-05 06:59:27.967098257Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2e030c32-069f-442c-a661-cde4c5404fb5 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 06:59:27 test-preload-755077 crio[831]: time="2025-12-05 06:59:27.967320321Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2e030c32-069f-442c-a661-cde4c5404fb5 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 06:59:27 test-preload-755077 crio[831]: time="2025-12-05 06:59:27.967697671Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d15ae53cbce5625d4b8edcfa1b0dd8e8e7ce6730d7b19f7124ba5dcac05d3599,PodSandboxId:67f77b7434196c39030e565b462aabf96987a2901d27352cde54eb3ed59ce8c3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1764917959212940751,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-67p6x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21f85926-5250-4dc9-aeae-bc72444916fa,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83396e3c7e5590c8edb194ba0e97541a5dbb841e7a44ff3fa0d55fa3d7dcd78c,PodSandboxId:f3f72f6ad9e189856973103ec8310e6215e836b864cd3cfd87bac2839e2b7d89,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1764917951623480390,Labe
ls:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0194a935-5452-4982-9002-9673ebc2f246,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0eb26d8a947be3c80d73bbe7a0985a1298f68623a3e8fd645f5464e545899a5,PodSandboxId:72b09fb9d6164f5aea0811191162706da9deda90de0ccef7da0d6f449ab41329,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1764917951597375626,Labels:map[string]string{
io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ctlgj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca360ad9-341d-4d31-8cf0-33aae295b3a9,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ba6ceb13a50115b472f6f8771865da38d2e7ac96e99909cc405bde24246726f,PodSandboxId:539ceb7c33627b10cbe16b42e98afa7defa1e1b12c4182bc8850f352aa2fabf9,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1764917947364642491,Labels:map[string]string{io.kubernetes.container.name: etcd,io.k
ubernetes.pod.name: etcd-test-preload-755077,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c18d0f5a5d504711b476dc186077b3e,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f623b814708f189a004045eaf417fcc829a6987724f614bc6877a54f88a6026,PodSandboxId:d8d4a514a4b62b73862134ef363e7501ed6d8003fb49e48740f6c9a62211b6c4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,Crea
tedAt:1764917947342209415,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-755077,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d91dc472ae200a6e65c2c1edcb1b38c0,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:463573787d93295d3414cf89078e8d2c15e95fb59b515184a9ce69d3166250f8,PodSandboxId:b4c85b400606e67ddb9084bdac937fc019c918f67c84a9ef82d0d4699cfd8113,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:
,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1764917947319996626,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-755077,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9366ae24310b1019e4240b72e096fe1,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aace3bb431811c670ff0069480a874a9104b6757f32c8772e1a35a91d3272e9b,PodSandboxId:baafb01ba24da7db19bc0fc2290cb6fc205448446e23350ba006441d55418034,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&Im
ageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1764917947323933808,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-755077,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d539c266be1e9a0f9bc3bfc2864fb496,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2e030c32-069f-442c-a661-cde4c5404fb5 name=/runtime.v1.RuntimeServic
e/ListContainers
	Dec 05 06:59:27 test-preload-755077 crio[831]: time="2025-12-05 06:59:27.995095796Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3cfd1f19-b22b-4de5-8fc6-5956a1764495 name=/runtime.v1.RuntimeService/Version
	Dec 05 06:59:27 test-preload-755077 crio[831]: time="2025-12-05 06:59:27.995318684Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3cfd1f19-b22b-4de5-8fc6-5956a1764495 name=/runtime.v1.RuntimeService/Version
	Dec 05 06:59:27 test-preload-755077 crio[831]: time="2025-12-05 06:59:27.997286144Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=91297a94-1103-455d-ba34-0dd7bdd12f7d name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 06:59:27 test-preload-755077 crio[831]: time="2025-12-05 06:59:27.997718945Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1764917967997689146,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:132143,},InodesUsed:&UInt64Value{Value:55,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=91297a94-1103-455d-ba34-0dd7bdd12f7d name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 06:59:28 test-preload-755077 crio[831]: time="2025-12-05 06:59:28.001543189Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2235f074-17b9-4e34-b51c-0288635ec64e name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 06:59:28 test-preload-755077 crio[831]: time="2025-12-05 06:59:28.001673209Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2235f074-17b9-4e34-b51c-0288635ec64e name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 06:59:28 test-preload-755077 crio[831]: time="2025-12-05 06:59:28.001924866Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d15ae53cbce5625d4b8edcfa1b0dd8e8e7ce6730d7b19f7124ba5dcac05d3599,PodSandboxId:67f77b7434196c39030e565b462aabf96987a2901d27352cde54eb3ed59ce8c3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1764917959212940751,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-67p6x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21f85926-5250-4dc9-aeae-bc72444916fa,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83396e3c7e5590c8edb194ba0e97541a5dbb841e7a44ff3fa0d55fa3d7dcd78c,PodSandboxId:f3f72f6ad9e189856973103ec8310e6215e836b864cd3cfd87bac2839e2b7d89,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1764917951623480390,Labe
ls:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0194a935-5452-4982-9002-9673ebc2f246,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0eb26d8a947be3c80d73bbe7a0985a1298f68623a3e8fd645f5464e545899a5,PodSandboxId:72b09fb9d6164f5aea0811191162706da9deda90de0ccef7da0d6f449ab41329,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1764917951597375626,Labels:map[string]string{
io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ctlgj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca360ad9-341d-4d31-8cf0-33aae295b3a9,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ba6ceb13a50115b472f6f8771865da38d2e7ac96e99909cc405bde24246726f,PodSandboxId:539ceb7c33627b10cbe16b42e98afa7defa1e1b12c4182bc8850f352aa2fabf9,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1764917947364642491,Labels:map[string]string{io.kubernetes.container.name: etcd,io.k
ubernetes.pod.name: etcd-test-preload-755077,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c18d0f5a5d504711b476dc186077b3e,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f623b814708f189a004045eaf417fcc829a6987724f614bc6877a54f88a6026,PodSandboxId:d8d4a514a4b62b73862134ef363e7501ed6d8003fb49e48740f6c9a62211b6c4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,Crea
tedAt:1764917947342209415,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-755077,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d91dc472ae200a6e65c2c1edcb1b38c0,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:463573787d93295d3414cf89078e8d2c15e95fb59b515184a9ce69d3166250f8,PodSandboxId:b4c85b400606e67ddb9084bdac937fc019c918f67c84a9ef82d0d4699cfd8113,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:
,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1764917947319996626,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-755077,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9366ae24310b1019e4240b72e096fe1,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aace3bb431811c670ff0069480a874a9104b6757f32c8772e1a35a91d3272e9b,PodSandboxId:baafb01ba24da7db19bc0fc2290cb6fc205448446e23350ba006441d55418034,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&Im
ageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1764917947323933808,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-755077,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d539c266be1e9a0f9bc3bfc2864fb496,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2235f074-17b9-4e34-b51c-0288635ec64e name=/runtime.v1.RuntimeServic
e/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                           NAMESPACE
	d15ae53cbce56       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   8 seconds ago       Running             coredns                   1                   67f77b7434196       coredns-66bc5c9577-67p6x                      kube-system
	83396e3c7e559       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   16 seconds ago      Running             storage-provisioner       1                   f3f72f6ad9e18       storage-provisioner                           kube-system
	a0eb26d8a947b       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45   16 seconds ago      Running             kube-proxy                1                   72b09fb9d6164       kube-proxy-ctlgj                              kube-system
	0ba6ceb13a501       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   20 seconds ago      Running             etcd                      1                   539ceb7c33627       etcd-test-preload-755077                      kube-system
	1f623b814708f       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952   20 seconds ago      Running             kube-scheduler            1                   d8d4a514a4b62       kube-scheduler-test-preload-755077            kube-system
	aace3bb431811       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85   20 seconds ago      Running             kube-apiserver            1                   baafb01ba24da       kube-apiserver-test-preload-755077            kube-system
	463573787d932       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8   20 seconds ago      Running             kube-controller-manager   1                   b4c85b400606e       kube-controller-manager-test-preload-755077   kube-system
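
This table summarizes the CRI-O ListContainers responses captured above. Assuming the test VM is still running, roughly the same view can be pulled directly from the node (a sketch, using the same ssh form the suite uses elsewhere):

    out/minikube-linux-amd64 -p test-preload-755077 ssh "sudo crictl ps -a"
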
	
	
	==> coredns [d15ae53cbce5625d4b8edcfa1b0dd8e8e7ce6730d7b19f7124ba5dcac05d3599] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:39534 - 47266 "HINFO IN 1727799546755136300.7115740443949662706. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.051744906s
	
	
	==> describe nodes <==
	Name:               test-preload-755077
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-755077
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d53406164b08000276c1d84507c3250851dada45
	                    minikube.k8s.io/name=test-preload-755077
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_05T06_58_19_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 05 Dec 2025 06:58:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-755077
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 05 Dec 2025 06:59:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 05 Dec 2025 06:59:20 +0000   Fri, 05 Dec 2025 06:58:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 05 Dec 2025 06:59:20 +0000   Fri, 05 Dec 2025 06:58:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 05 Dec 2025 06:59:20 +0000   Fri, 05 Dec 2025 06:58:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 05 Dec 2025 06:59:20 +0000   Fri, 05 Dec 2025 06:59:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.198
	  Hostname:    test-preload-755077
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035908Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035908Ki
	  pods:               110
	System Info:
	  Machine ID:                 e263e86fe33041a7bdab1b93aa8d904c
	  System UUID:                e263e86f-e330-41a7-bdab-1b93aa8d904c
	  Boot ID:                    088b7786-4caf-4c78-bb62-54ae968cfb2f
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-67p6x                       100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     64s
	  kube-system                 etcd-test-preload-755077                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         70s
	  kube-system                 kube-apiserver-test-preload-755077             250m (12%)    0 (0%)      0 (0%)           0 (0%)         71s
	  kube-system                 kube-controller-manager-test-preload-755077    200m (10%)    0 (0%)      0 (0%)           0 (0%)         71s
	  kube-system                 kube-proxy-ctlgj                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         65s
	  kube-system                 kube-scheduler-test-preload-755077             100m (5%)     0 (0%)      0 (0%)           0 (0%)         72s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         63s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (5%)  170Mi (5%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 63s                kube-proxy       
	  Normal   Starting                 16s                kube-proxy       
	  Normal   NodeHasSufficientMemory  70s                kubelet          Node test-preload-755077 status is now: NodeHasSufficientMemory
	  Normal   NodeAllocatableEnforced  70s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasNoDiskPressure    70s                kubelet          Node test-preload-755077 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     70s                kubelet          Node test-preload-755077 status is now: NodeHasSufficientPID
	  Normal   Starting                 70s                kubelet          Starting kubelet.
	  Normal   NodeReady                69s                kubelet          Node test-preload-755077 status is now: NodeReady
	  Normal   RegisteredNode           66s                node-controller  Node test-preload-755077 event: Registered Node test-preload-755077 in Controller
	  Normal   Starting                 22s                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  22s (x8 over 22s)  kubelet          Node test-preload-755077 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    22s (x8 over 22s)  kubelet          Node test-preload-755077 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     22s (x7 over 22s)  kubelet          Node test-preload-755077 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  22s                kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 18s                kubelet          Node test-preload-755077 has been rebooted, boot id: 088b7786-4caf-4c78-bb62-54ae968cfb2f
	  Normal   RegisteredNode           15s                node-controller  Node test-preload-755077 event: Registered Node test-preload-755077 in Controller
	
	
	==> dmesg <==
	[Dec 5 06:58] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000006] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.001142] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.005410] (rpcbind)[121]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +0.983786] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000016] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Dec 5 06:59] kauditd_printk_skb: 88 callbacks suppressed
	[  +5.602376] kauditd_printk_skb: 196 callbacks suppressed
	[  +0.000057] kauditd_printk_skb: 128 callbacks suppressed
	[  +7.198410] kauditd_printk_skb: 65 callbacks suppressed
	
	
	==> etcd [0ba6ceb13a50115b472f6f8771865da38d2e7ac96e99909cc405bde24246726f] <==
	{"level":"warn","ts":"2025-12-05T06:59:08.969445Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36128","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:59:08.991437Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36150","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:59:09.003940Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36164","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:59:09.023063Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36184","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:59:09.039791Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36196","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:59:09.059058Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36228","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:59:09.061035Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36212","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:59:09.082932Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36252","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:59:09.098920Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36266","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:59:09.108966Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36296","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:59:09.128843Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36320","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:59:09.135310Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36332","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:59:09.148750Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36340","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:59:09.161510Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36356","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:59:09.173912Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36372","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:59:09.189467Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36386","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:59:09.210977Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36394","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:59:09.231496Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36410","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:59:09.249710Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36424","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:59:09.260538Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36456","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:59:09.288299Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36468","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:59:09.321833Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36486","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:59:09.333291Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36504","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:59:09.342341Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36514","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:59:09.396354Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36528","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 06:59:28 up 0 min,  0 users,  load average: 1.31, 0.36, 0.12
	Linux test-preload-755077 6.6.95 #1 SMP PREEMPT_DYNAMIC Wed Nov 19 01:10:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [aace3bb431811c670ff0069480a874a9104b6757f32c8772e1a35a91d3272e9b] <==
	I1205 06:59:10.274936       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1205 06:59:10.276002       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1205 06:59:10.285929       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1205 06:59:10.285970       1 policy_source.go:240] refreshing policies
	I1205 06:59:10.286542       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1205 06:59:10.286667       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1205 06:59:10.286842       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1205 06:59:10.287028       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1205 06:59:10.287568       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1205 06:59:10.287662       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1205 06:59:10.291054       1 aggregator.go:171] initial CRD sync complete...
	I1205 06:59:10.291133       1 autoregister_controller.go:144] Starting autoregister controller
	I1205 06:59:10.291153       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1205 06:59:10.291159       1 cache.go:39] Caches are synced for autoregister controller
	I1205 06:59:10.305612       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	E1205 06:59:10.327443       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1205 06:59:11.163679       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1205 06:59:11.208889       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1205 06:59:11.905438       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1205 06:59:11.945487       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1205 06:59:11.981566       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1205 06:59:11.993986       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1205 06:59:13.650691       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1205 06:59:13.939135       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1205 06:59:13.988722       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [463573787d93295d3414cf89078e8d2c15e95fb59b515184a9ce69d3166250f8] <==
	I1205 06:59:13.641097       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1205 06:59:13.641638       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1205 06:59:13.641812       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1205 06:59:13.641910       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1205 06:59:13.641928       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1205 06:59:13.641949       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1205 06:59:13.646081       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1205 06:59:13.646291       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1205 06:59:13.646699       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1205 06:59:13.648028       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1205 06:59:13.649379       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1205 06:59:13.651321       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1205 06:59:13.658596       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1205 06:59:13.685040       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1205 06:59:13.686275       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1205 06:59:13.686330       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1205 06:59:13.686362       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1205 06:59:13.686452       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1205 06:59:13.687670       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1205 06:59:13.688912       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1205 06:59:13.691275       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1205 06:59:13.692535       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1205 06:59:13.698795       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1205 06:59:13.698917       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1205 06:59:23.638606       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [a0eb26d8a947be3c80d73bbe7a0985a1298f68623a3e8fd645f5464e545899a5] <==
	I1205 06:59:11.854476       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1205 06:59:11.955525       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1205 06:59:11.955575       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.198"]
	E1205 06:59:11.955658       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1205 06:59:12.013472       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1205 06:59:12.013523       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1205 06:59:12.013548       1 server_linux.go:132] "Using iptables Proxier"
	I1205 06:59:12.022543       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1205 06:59:12.022936       1 server.go:527] "Version info" version="v1.34.2"
	I1205 06:59:12.022960       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1205 06:59:12.028709       1 config.go:200] "Starting service config controller"
	I1205 06:59:12.028720       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1205 06:59:12.028736       1 config.go:106] "Starting endpoint slice config controller"
	I1205 06:59:12.028740       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1205 06:59:12.028750       1 config.go:403] "Starting serviceCIDR config controller"
	I1205 06:59:12.028754       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1205 06:59:12.029415       1 config.go:309] "Starting node config controller"
	I1205 06:59:12.029461       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1205 06:59:12.029478       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1205 06:59:12.129586       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1205 06:59:12.129716       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1205 06:59:12.129751       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [1f623b814708f189a004045eaf417fcc829a6987724f614bc6877a54f88a6026] <==
	I1205 06:59:09.175877       1 serving.go:386] Generated self-signed cert in-memory
	I1205 06:59:10.562738       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1205 06:59:10.562774       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1205 06:59:10.570385       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1205 06:59:10.570482       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1205 06:59:10.570806       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1205 06:59:10.571348       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1205 06:59:10.571093       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1205 06:59:10.571152       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1205 06:59:10.571587       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1205 06:59:10.571143       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1205 06:59:10.671910       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1205 06:59:10.672417       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1205 06:59:10.672604       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	
	
	==> kubelet <==
	Dec 05 06:59:10 test-preload-755077 kubelet[1181]: I1205 06:59:10.400027    1181 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-test-preload-755077"
	Dec 05 06:59:10 test-preload-755077 kubelet[1181]: E1205 06:59:10.410988    1181 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-test-preload-755077\" already exists" pod="kube-system/kube-apiserver-test-preload-755077"
	Dec 05 06:59:10 test-preload-755077 kubelet[1181]: I1205 06:59:10.411014    1181 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-test-preload-755077"
	Dec 05 06:59:10 test-preload-755077 kubelet[1181]: E1205 06:59:10.423037    1181 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-test-preload-755077\" already exists" pod="kube-system/kube-controller-manager-test-preload-755077"
	Dec 05 06:59:11 test-preload-755077 kubelet[1181]: I1205 06:59:11.124904    1181 apiserver.go:52] "Watching apiserver"
	Dec 05 06:59:11 test-preload-755077 kubelet[1181]: E1205 06:59:11.132814    1181 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-66bc5c9577-67p6x" podUID="21f85926-5250-4dc9-aeae-bc72444916fa"
	Dec 05 06:59:11 test-preload-755077 kubelet[1181]: I1205 06:59:11.165921    1181 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Dec 05 06:59:11 test-preload-755077 kubelet[1181]: I1205 06:59:11.200511    1181 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ca360ad9-341d-4d31-8cf0-33aae295b3a9-lib-modules\") pod \"kube-proxy-ctlgj\" (UID: \"ca360ad9-341d-4d31-8cf0-33aae295b3a9\") " pod="kube-system/kube-proxy-ctlgj"
	Dec 05 06:59:11 test-preload-755077 kubelet[1181]: I1205 06:59:11.200574    1181 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/0194a935-5452-4982-9002-9673ebc2f246-tmp\") pod \"storage-provisioner\" (UID: \"0194a935-5452-4982-9002-9673ebc2f246\") " pod="kube-system/storage-provisioner"
	Dec 05 06:59:11 test-preload-755077 kubelet[1181]: I1205 06:59:11.200588    1181 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ca360ad9-341d-4d31-8cf0-33aae295b3a9-xtables-lock\") pod \"kube-proxy-ctlgj\" (UID: \"ca360ad9-341d-4d31-8cf0-33aae295b3a9\") " pod="kube-system/kube-proxy-ctlgj"
	Dec 05 06:59:11 test-preload-755077 kubelet[1181]: E1205 06:59:11.201344    1181 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Dec 05 06:59:11 test-preload-755077 kubelet[1181]: E1205 06:59:11.201416    1181 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/21f85926-5250-4dc9-aeae-bc72444916fa-config-volume podName:21f85926-5250-4dc9-aeae-bc72444916fa nodeName:}" failed. No retries permitted until 2025-12-05 06:59:11.70139924 +0000 UTC m=+5.676367937 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/21f85926-5250-4dc9-aeae-bc72444916fa-config-volume") pod "coredns-66bc5c9577-67p6x" (UID: "21f85926-5250-4dc9-aeae-bc72444916fa") : object "kube-system"/"coredns" not registered
	Dec 05 06:59:11 test-preload-755077 kubelet[1181]: E1205 06:59:11.220904    1181 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Dec 05 06:59:11 test-preload-755077 kubelet[1181]: E1205 06:59:11.704739    1181 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Dec 05 06:59:11 test-preload-755077 kubelet[1181]: E1205 06:59:11.705567    1181 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/21f85926-5250-4dc9-aeae-bc72444916fa-config-volume podName:21f85926-5250-4dc9-aeae-bc72444916fa nodeName:}" failed. No retries permitted until 2025-12-05 06:59:12.705356991 +0000 UTC m=+6.680325677 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/21f85926-5250-4dc9-aeae-bc72444916fa-config-volume") pod "coredns-66bc5c9577-67p6x" (UID: "21f85926-5250-4dc9-aeae-bc72444916fa") : object "kube-system"/"coredns" not registered
	Dec 05 06:59:12 test-preload-755077 kubelet[1181]: E1205 06:59:12.712078    1181 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Dec 05 06:59:12 test-preload-755077 kubelet[1181]: E1205 06:59:12.712207    1181 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/21f85926-5250-4dc9-aeae-bc72444916fa-config-volume podName:21f85926-5250-4dc9-aeae-bc72444916fa nodeName:}" failed. No retries permitted until 2025-12-05 06:59:14.712191974 +0000 UTC m=+8.687160660 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/21f85926-5250-4dc9-aeae-bc72444916fa-config-volume") pod "coredns-66bc5c9577-67p6x" (UID: "21f85926-5250-4dc9-aeae-bc72444916fa") : object "kube-system"/"coredns" not registered
	Dec 05 06:59:13 test-preload-755077 kubelet[1181]: E1205 06:59:13.178691    1181 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-66bc5c9577-67p6x" podUID="21f85926-5250-4dc9-aeae-bc72444916fa"
	Dec 05 06:59:14 test-preload-755077 kubelet[1181]: E1205 06:59:14.727164    1181 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Dec 05 06:59:14 test-preload-755077 kubelet[1181]: E1205 06:59:14.727317    1181 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/21f85926-5250-4dc9-aeae-bc72444916fa-config-volume podName:21f85926-5250-4dc9-aeae-bc72444916fa nodeName:}" failed. No retries permitted until 2025-12-05 06:59:18.727296392 +0000 UTC m=+12.702265089 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/21f85926-5250-4dc9-aeae-bc72444916fa-config-volume") pod "coredns-66bc5c9577-67p6x" (UID: "21f85926-5250-4dc9-aeae-bc72444916fa") : object "kube-system"/"coredns" not registered
	Dec 05 06:59:15 test-preload-755077 kubelet[1181]: E1205 06:59:15.178358    1181 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-66bc5c9577-67p6x" podUID="21f85926-5250-4dc9-aeae-bc72444916fa"
	Dec 05 06:59:16 test-preload-755077 kubelet[1181]: E1205 06:59:16.223289    1181 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1764917956222715980 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:132143} inodes_used:{value:55}}"
	Dec 05 06:59:16 test-preload-755077 kubelet[1181]: E1205 06:59:16.223324    1181 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1764917956222715980 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:132143} inodes_used:{value:55}}"
	Dec 05 06:59:26 test-preload-755077 kubelet[1181]: E1205 06:59:26.224632    1181 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1764917966224204238 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:132143} inodes_used:{value:55}}"
	Dec 05 06:59:26 test-preload-755077 kubelet[1181]: E1205 06:59:26.224674    1181 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1764917966224204238 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:132143} inodes_used:{value:55}}"
	
	
	==> storage-provisioner [83396e3c7e5590c8edb194ba0e97541a5dbb841e7a44ff3fa0d55fa3d7dcd78c] <==
	I1205 06:59:11.772888       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-755077 -n test-preload-755077
helpers_test.go:269: (dbg) Run:  kubectl --context test-preload-755077 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-755077" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-755077
--- FAIL: TestPreload (117.06s)

                                                
                                    

Test pass (381/437)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 7.85
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.08
9 TestDownloadOnly/v1.28.0/DeleteAll 0.15
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.34.2/json-events 3.33
13 TestDownloadOnly/v1.34.2/preload-exists 0
17 TestDownloadOnly/v1.34.2/LogsDuration 0.07
18 TestDownloadOnly/v1.34.2/DeleteAll 0.15
19 TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds 0.15
21 TestDownloadOnly/v1.35.0-beta.0/json-events 3.02
23 TestDownloadOnly/v1.35.0-beta.0/cached-images 0
24 TestDownloadOnly/v1.35.0-beta.0/binaries 0
26 TestDownloadOnly/v1.35.0-beta.0/LogsDuration 0.07
27 TestDownloadOnly/v1.35.0-beta.0/DeleteAll 0.15
28 TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds 0.15
30 TestBinaryMirror 0.65
31 TestOffline 80.98
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
36 TestAddons/Setup 196.87
40 TestAddons/serial/GCPAuth/Namespaces 0.14
41 TestAddons/serial/GCPAuth/FakeCredentials 11.53
44 TestAddons/parallel/Registry 18.25
45 TestAddons/parallel/RegistryCreds 0.71
47 TestAddons/parallel/InspektorGadget 11.81
48 TestAddons/parallel/MetricsServer 5.74
50 TestAddons/parallel/CSI 99
51 TestAddons/parallel/Headlamp 56.21
52 TestAddons/parallel/CloudSpanner 6.54
53 TestAddons/parallel/LocalPath 57.77
54 TestAddons/parallel/NvidiaDevicePlugin 6.62
55 TestAddons/parallel/Yakd 12.08
57 TestAddons/StoppedEnableDisable 89.64
58 TestCertOptions 42.48
61 TestForceSystemdFlag 56.68
62 TestForceSystemdEnv 46.93
67 TestErrorSpam/setup 34.88
68 TestErrorSpam/start 0.32
69 TestErrorSpam/status 0.66
70 TestErrorSpam/pause 1.51
71 TestErrorSpam/unpause 1.67
72 TestErrorSpam/stop 4.97
75 TestFunctional/serial/CopySyncFile 0
76 TestFunctional/serial/StartWithProxy 79.42
77 TestFunctional/serial/AuditLog 0
78 TestFunctional/serial/SoftStart 38.34
79 TestFunctional/serial/KubeContext 0.05
80 TestFunctional/serial/KubectlGetPods 0.08
83 TestFunctional/serial/CacheCmd/cache/add_remote 5.23
84 TestFunctional/serial/CacheCmd/cache/add_local 1.9
85 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
86 TestFunctional/serial/CacheCmd/cache/list 0.06
87 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.18
88 TestFunctional/serial/CacheCmd/cache/cache_reload 1.46
89 TestFunctional/serial/CacheCmd/cache/delete 0.12
90 TestFunctional/serial/MinikubeKubectlCmd 0.12
91 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
92 TestFunctional/serial/ExtraConfig 30.53
93 TestFunctional/serial/ComponentHealth 0.06
94 TestFunctional/serial/LogsCmd 1.29
95 TestFunctional/serial/LogsFileCmd 1.25
96 TestFunctional/serial/InvalidService 4.38
98 TestFunctional/parallel/ConfigCmd 0.41
99 TestFunctional/parallel/DashboardCmd 44.76
100 TestFunctional/parallel/DryRun 0.24
101 TestFunctional/parallel/InternationalLanguage 0.12
102 TestFunctional/parallel/StatusCmd 0.68
106 TestFunctional/parallel/ServiceCmdConnect 10.44
107 TestFunctional/parallel/AddonsCmd 0.16
108 TestFunctional/parallel/PersistentVolumeClaim 49.24
110 TestFunctional/parallel/SSHCmd 0.35
111 TestFunctional/parallel/CpCmd 1.14
112 TestFunctional/parallel/MySQL 29.13
113 TestFunctional/parallel/FileSync 0.16
114 TestFunctional/parallel/CertSync 1.02
118 TestFunctional/parallel/NodeLabels 0.06
120 TestFunctional/parallel/NonActiveRuntimeDisabled 0.32
122 TestFunctional/parallel/License 0.22
123 TestFunctional/parallel/ServiceCmd/DeployApp 8.21
133 TestFunctional/parallel/Version/short 0.06
134 TestFunctional/parallel/Version/components 0.55
136 TestFunctional/parallel/ImageCommands/ImageListTable 0.2
137 TestFunctional/parallel/ImageCommands/ImageListJson 0.66
138 TestFunctional/parallel/ImageCommands/ImageListYaml 1.6
140 TestFunctional/parallel/ImageCommands/Setup 1.55
141 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.14
142 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.8
143 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.52
144 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.48
145 TestFunctional/parallel/ImageCommands/ImageRemove 0.43
146 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.62
147 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.56
148 TestFunctional/parallel/ServiceCmd/List 0.25
149 TestFunctional/parallel/ServiceCmd/JSONOutput 0.23
150 TestFunctional/parallel/ServiceCmd/HTTPS 0.24
151 TestFunctional/parallel/ServiceCmd/Format 0.24
152 TestFunctional/parallel/ServiceCmd/URL 0.23
153 TestFunctional/parallel/ProfileCmd/profile_not_create 0.4
154 TestFunctional/parallel/MountCmd/any-port 15.2
155 TestFunctional/parallel/ProfileCmd/profile_list 0.29
156 TestFunctional/parallel/ProfileCmd/profile_json_output 0.33
157 TestFunctional/parallel/UpdateContextCmd/no_changes 0.08
158 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.07
159 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.07
160 TestFunctional/parallel/MountCmd/specific-port 1.31
161 TestFunctional/parallel/MountCmd/VerifyCleanup 1.27
162 TestFunctional/delete_echo-server_images 0.04
163 TestFunctional/delete_my-image_image 0.02
164 TestFunctional/delete_minikube_cached_images 0.02
168 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile 0
169 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy 85.35
170 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog 0
171 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart 38.71
172 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext 0.04
173 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods 0.14
176 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote 2.96
177 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local 1.85
178 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete 0.06
179 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list 0.06
180 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node 0.18
181 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload 1.5
182 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete 0.12
183 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd 0.12
184 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly 0.11
185 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig 34.5
186 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth 0.06
187 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd 1.28
188 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd 1.28
189 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService 4.53
191 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd 0.43
192 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd 32.84
193 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun 0.25
194 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage 0.13
195 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd 0.65
199 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect 9.55
200 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd 0.16
201 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim 44.27
203 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd 0.36
204 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd 1.17
205 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL 32.31
206 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync 0.19
207 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync 1.29
211 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels 0.07
213 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled 0.43
215 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License 0.25
216 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp 9.21
226 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create 0.31
227 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list 0.3
228 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output 0.33
229 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port 7.16
230 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List 0.23
231 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput 0.23
232 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS 0.31
233 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port 1.8
234 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format 0.42
235 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL 0.28
236 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short 0.06
237 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components 0.51
238 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort 0.21
239 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable 0.45
240 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson 0.48
241 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml 0.25
242 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild 4.31
243 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup 0.69
244 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon 3.98
245 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup 1.24
246 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes 0.07
247 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster 0.07
248 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters 0.07
249 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon 0.85
250 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon 1.64
251 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile 0.59
252 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove 0.58
253 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile 2.77
254 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon 0.6
255 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images 0.03
256 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image 0.02
257 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images 0.02
261 TestMultiControlPlane/serial/StartCluster 198.58
262 TestMultiControlPlane/serial/DeployApp 6.7
263 TestMultiControlPlane/serial/PingHostFromPods 1.29
264 TestMultiControlPlane/serial/AddWorkerNode 46.18
265 TestMultiControlPlane/serial/NodeLabels 0.07
266 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.65
267 TestMultiControlPlane/serial/CopyFile 10.42
268 TestMultiControlPlane/serial/StopSecondaryNode 83.61
269 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.5
270 TestMultiControlPlane/serial/RestartSecondaryNode 35.15
271 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.69
272 TestMultiControlPlane/serial/RestartClusterKeepsNodes 333.86
273 TestMultiControlPlane/serial/DeleteSecondaryNode 17.94
274 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.49
275 TestMultiControlPlane/serial/StopCluster 259.86
276 TestMultiControlPlane/serial/RestartCluster 105.17
277 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.5
278 TestMultiControlPlane/serial/AddSecondaryNode 71.56
279 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.66
284 TestJSONOutput/start/Command 74.57
285 TestJSONOutput/start/Audit 0
287 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
288 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
290 TestJSONOutput/pause/Command 0.7
291 TestJSONOutput/pause/Audit 0
293 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
294 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
296 TestJSONOutput/unpause/Command 0.61
297 TestJSONOutput/unpause/Audit 0
299 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
300 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
302 TestJSONOutput/stop/Command 8.09
303 TestJSONOutput/stop/Audit 0
305 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
306 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
307 TestErrorJSONOutput 0.22
312 TestMainNoArgs 0.06
313 TestMinikubeProfile 76.15
316 TestMountStart/serial/StartWithMountFirst 20.23
317 TestMountStart/serial/VerifyMountFirst 0.3
318 TestMountStart/serial/StartWithMountSecond 19.52
319 TestMountStart/serial/VerifyMountSecond 0.3
320 TestMountStart/serial/DeleteFirst 0.68
321 TestMountStart/serial/VerifyMountPostDelete 0.3
322 TestMountStart/serial/Stop 1.24
323 TestMountStart/serial/RestartStopped 17.92
324 TestMountStart/serial/VerifyMountPostStop 0.3
327 TestMultiNode/serial/FreshStart2Nodes 93.21
328 TestMultiNode/serial/DeployApp2Nodes 5.82
329 TestMultiNode/serial/PingHostFrom2Pods 0.82
330 TestMultiNode/serial/AddNode 41.98
331 TestMultiNode/serial/MultiNodeLabels 0.06
332 TestMultiNode/serial/ProfileList 0.44
333 TestMultiNode/serial/CopyFile 5.82
334 TestMultiNode/serial/StopNode 2.15
335 TestMultiNode/serial/StartAfterStop 40.03
336 TestMultiNode/serial/RestartKeepsNodes 294.8
337 TestMultiNode/serial/DeleteNode 2.58
338 TestMultiNode/serial/StopMultiNode 157.81
339 TestMultiNode/serial/RestartMultiNode 82.08
340 TestMultiNode/serial/ValidateNameConflict 39.04
347 TestScheduledStopUnix 110.31
351 TestRunningBinaryUpgrade 390.71
353 TestKubernetesUpgrade 503.18
356 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
357 TestNoKubernetes/serial/StartWithK8s 75.48
358 TestNoKubernetes/serial/StartWithStopK8s 25.33
367 TestPause/serial/Start 97.77
368 TestNoKubernetes/serial/Start 45.33
369 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
370 TestNoKubernetes/serial/VerifyK8sNotRunning 0.16
371 TestNoKubernetes/serial/ProfileList 16.16
372 TestNoKubernetes/serial/Stop 1.27
373 TestNoKubernetes/serial/StartNoArgs 17.21
374 TestPause/serial/SecondStartNoReconfiguration 33.58
375 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.15
376 TestPause/serial/Pause 1
377 TestPause/serial/VerifyStatus 0.22
378 TestPause/serial/Unpause 0.74
379 TestPause/serial/PauseAgain 0.83
380 TestPause/serial/DeletePaused 0.83
381 TestPause/serial/VerifyDeletedResources 0.49
389 TestNetworkPlugins/group/false 3.8
393 TestISOImage/Setup 19.69
395 TestISOImage/Binaries/crictl 0.19
396 TestISOImage/Binaries/curl 0.18
397 TestISOImage/Binaries/docker 0.17
398 TestISOImage/Binaries/git 0.18
399 TestISOImage/Binaries/iptables 0.18
400 TestISOImage/Binaries/podman 0.18
401 TestISOImage/Binaries/rsync 0.19
402 TestISOImage/Binaries/socat 0.17
403 TestISOImage/Binaries/wget 0.18
404 TestISOImage/Binaries/VBoxControl 0.17
405 TestISOImage/Binaries/VBoxService 0.18
406 TestStoppedBinaryUpgrade/Setup 0.68
407 TestStoppedBinaryUpgrade/Upgrade 72.06
408 TestStoppedBinaryUpgrade/MinikubeLogs 1.14
410 TestStartStop/group/old-k8s-version/serial/FirstStart 104.34
412 TestStartStop/group/no-preload/serial/FirstStart 89.8
413 TestStartStop/group/old-k8s-version/serial/DeployApp 10.34
415 TestStartStop/group/embed-certs/serial/FirstStart 82.74
416 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.16
417 TestStartStop/group/old-k8s-version/serial/Stop 85.35
418 TestStartStop/group/no-preload/serial/DeployApp 11.29
419 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.06
420 TestStartStop/group/no-preload/serial/Stop 85.81
421 TestStartStop/group/embed-certs/serial/DeployApp 11.28
422 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.15
423 TestStartStop/group/old-k8s-version/serial/SecondStart 44.6
424 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.95
425 TestStartStop/group/embed-certs/serial/Stop 90.68
426 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.19
427 TestStartStop/group/no-preload/serial/SecondStart 55.7
428 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 10.01
429 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.08
430 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.2
431 TestStartStop/group/old-k8s-version/serial/Pause 2.62
433 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 56.32
434 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
435 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.08
436 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.23
437 TestStartStop/group/no-preload/serial/Pause 3.41
439 TestStartStop/group/newest-cni/serial/FirstStart 55.28
440 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.2
441 TestStartStop/group/embed-certs/serial/SecondStart 58.29
442 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 11.33
443 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.19
444 TestStartStop/group/default-k8s-diff-port/serial/Stop 70.5
445 TestStartStop/group/newest-cni/serial/DeployApp 0
446 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.07
447 TestStartStop/group/newest-cni/serial/Stop 87.21
448 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
449 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.07
450 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.21
451 TestStartStop/group/embed-certs/serial/Pause 2.46
452 TestNetworkPlugins/group/auto/Start 55.6
453 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.15
454 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 107.22
455 TestNetworkPlugins/group/auto/KubeletFlags 0.43
456 TestNetworkPlugins/group/auto/NetCatPod 10.95
457 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.14
458 TestStartStop/group/newest-cni/serial/SecondStart 45.96
459 TestNetworkPlugins/group/auto/DNS 0.14
460 TestNetworkPlugins/group/auto/Localhost 0.14
461 TestNetworkPlugins/group/auto/HairPin 0.13
462 TestNetworkPlugins/group/kindnet/Start 58.87
463 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
464 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
465 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.38
466 TestStartStop/group/newest-cni/serial/Pause 3.54
467 TestNetworkPlugins/group/calico/Start 74.09
468 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
469 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
470 TestNetworkPlugins/group/kindnet/KubeletFlags 0.18
471 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.07
472 TestNetworkPlugins/group/kindnet/NetCatPod 10.31
473 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.19
474 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.57
475 TestNetworkPlugins/group/custom-flannel/Start 67.57
476 TestNetworkPlugins/group/kindnet/DNS 0.15
477 TestNetworkPlugins/group/kindnet/Localhost 0.12
478 TestNetworkPlugins/group/kindnet/HairPin 0.13
479 TestNetworkPlugins/group/enable-default-cni/Start 85.23
480 TestNetworkPlugins/group/calico/ControllerPod 6.06
481 TestNetworkPlugins/group/calico/KubeletFlags 0.19
482 TestNetworkPlugins/group/calico/NetCatPod 12.26
483 TestNetworkPlugins/group/calico/DNS 0.18
484 TestNetworkPlugins/group/calico/Localhost 0.13
485 TestNetworkPlugins/group/calico/HairPin 0.14
486 TestNetworkPlugins/group/flannel/Start 68.77
487 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.21
488 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.29
489 TestNetworkPlugins/group/custom-flannel/DNS 0.18
490 TestNetworkPlugins/group/custom-flannel/Localhost 0.15
491 TestNetworkPlugins/group/custom-flannel/HairPin 0.15
492 TestNetworkPlugins/group/bridge/Start 78.66
493 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.2
494 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.29
495 TestNetworkPlugins/group/enable-default-cni/DNS 0.18
496 TestNetworkPlugins/group/enable-default-cni/Localhost 0.12
497 TestNetworkPlugins/group/enable-default-cni/HairPin 0.13
499 TestISOImage/PersistentMounts//data 0.17
500 TestISOImage/PersistentMounts//var/lib/docker 0.17
501 TestISOImage/PersistentMounts//var/lib/cni 0.18
502 TestISOImage/PersistentMounts//var/lib/kubelet 0.17
503 TestISOImage/PersistentMounts//var/lib/minikube 0.18
504 TestISOImage/PersistentMounts//var/lib/toolbox 0.18
505 TestISOImage/PersistentMounts//var/lib/boot2docker 0.18
506 TestISOImage/VersionJSON 0.18
507 TestISOImage/eBPFSupport 0.17
508 TestNetworkPlugins/group/flannel/ControllerPod 6.01
509 TestNetworkPlugins/group/flannel/KubeletFlags 0.19
510 TestNetworkPlugins/group/flannel/NetCatPod 9.29
511 TestNetworkPlugins/group/flannel/DNS 0.15
512 TestNetworkPlugins/group/flannel/Localhost 0.13
513 TestNetworkPlugins/group/flannel/HairPin 0.13
514 TestNetworkPlugins/group/bridge/KubeletFlags 0.17
515 TestNetworkPlugins/group/bridge/NetCatPod 9.22
516 TestNetworkPlugins/group/bridge/DNS 0.13
517 TestNetworkPlugins/group/bridge/Localhost 0.11
518 TestNetworkPlugins/group/bridge/HairPin 0.11
x
+
TestDownloadOnly/v1.28.0/json-events (7.85s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-818301 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-818301 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (7.853928125s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (7.85s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1205 06:05:01.729195   16702 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1205 06:05:01.729281   16702 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21997-12744/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-818301
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-818301: exit status 85 (75.119891ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                  ARGS                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-818301 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-818301 │ jenkins │ v1.37.0 │ 05 Dec 25 06:04 UTC │          │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/05 06:04:53
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 06:04:53.925977   16713 out.go:360] Setting OutFile to fd 1 ...
	I1205 06:04:53.926271   16713 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:04:53.926281   16713 out.go:374] Setting ErrFile to fd 2...
	I1205 06:04:53.926289   16713 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:04:53.926475   16713 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-12744/.minikube/bin
	W1205 06:04:53.926609   16713 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21997-12744/.minikube/config/config.json: open /home/jenkins/minikube-integration/21997-12744/.minikube/config/config.json: no such file or directory
	I1205 06:04:53.927125   16713 out.go:368] Setting JSON to true
	I1205 06:04:53.927996   16713 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":2839,"bootTime":1764911855,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1205 06:04:53.928045   16713 start.go:143] virtualization: kvm guest
	I1205 06:04:53.931973   16713 out.go:99] [download-only-818301] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	W1205 06:04:53.932096   16713 preload.go:354] Failed to list preload files: open /home/jenkins/minikube-integration/21997-12744/.minikube/cache/preloaded-tarball: no such file or directory
	I1205 06:04:53.932153   16713 notify.go:221] Checking for updates...
	I1205 06:04:53.933295   16713 out.go:171] MINIKUBE_LOCATION=21997
	I1205 06:04:53.934526   16713 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 06:04:53.935815   16713 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21997-12744/kubeconfig
	I1205 06:04:53.937174   16713 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-12744/.minikube
	I1205 06:04:53.938374   16713 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1205 06:04:53.940569   16713 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1205 06:04:53.940844   16713 driver.go:422] Setting default libvirt URI to qemu:///system
	I1205 06:04:54.438985   16713 out.go:99] Using the kvm2 driver based on user configuration
	I1205 06:04:54.439018   16713 start.go:309] selected driver: kvm2
	I1205 06:04:54.439026   16713 start.go:927] validating driver "kvm2" against <nil>
	I1205 06:04:54.439356   16713 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1205 06:04:54.439895   16713 start_flags.go:410] Using suggested 6144MB memory alloc based on sys=32093MB, container=0MB
	I1205 06:04:54.440064   16713 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1205 06:04:54.440095   16713 cni.go:84] Creating CNI manager for ""
	I1205 06:04:54.440152   16713 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 06:04:54.440164   16713 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1205 06:04:54.440218   16713 start.go:353] cluster config:
	{Name:download-only-818301 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-818301 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 06:04:54.440412   16713 iso.go:125] acquiring lock: {Name:mk8940d2199650f8674488213bff178b8d82a626 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:04:54.441861   16713 out.go:99] Downloading VM boot image ...
	I1205 06:04:54.441913   16713 download.go:108] Downloading: https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso.sha256 -> /home/jenkins/minikube-integration/21997-12744/.minikube/cache/iso/amd64/minikube-v1.37.0-1763503576-21924-amd64.iso
	I1205 06:04:57.462566   16713 out.go:99] Starting "download-only-818301" primary control-plane node in "download-only-818301" cluster
	I1205 06:04:57.462613   16713 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1205 06:04:57.479993   16713 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1205 06:04:57.480023   16713 cache.go:65] Caching tarball of preloaded images
	I1205 06:04:57.480185   16713 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1205 06:04:57.481796   16713 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1205 06:04:57.481814   16713 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1205 06:04:57.504678   16713 preload.go:295] Got checksum from GCS API "72bc7f8573f574c02d8c9a9b3496176b"
	I1205 06:04:57.504810   16713 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:72bc7f8573f574c02d8c9a9b3496176b -> /home/jenkins/minikube-integration/21997-12744/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-818301 host does not exist
	  To start a cluster, run: "minikube start -p download-only-818301"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.08s)
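
Note on the exit status above: a --download-only profile never boots a host, so "minikube logs -p download-only-818301" has nothing to collect and exits non-zero (status 85 here); the sub-test records that failure and still passes, which indicates the non-zero exit is the expected outcome. A minimal standalone sketch of reproducing the check, using the profile name from the run above; this is not the actual aaa_download_only_test.go helper:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Profile name taken from the run above; adjust for your environment.
	cmd := exec.Command("out/minikube-linux-amd64", "logs", "-p", "download-only-818301")
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s", out)

	// A download-only profile has no running control-plane host, so a
	// non-zero exit is expected; status 85 is what this report records.
	if exitErr, ok := err.(*exec.ExitError); ok && exitErr.ExitCode() == 85 {
		fmt.Println("got the expected exit status 85 for a download-only profile")
		return
	}
	fmt.Printf("unexpected result from minikube logs: %v\n", err)
}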

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/DeleteAll (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.15s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-818301
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/json-events (3.33s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-566761 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-566761 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (3.325233658s)
--- PASS: TestDownloadOnly/v1.34.2/json-events (3.33s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/preload-exists
I1205 06:05:05.428191   16702 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
I1205 06:05:05.428226   16702 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21997-12744/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.2/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-566761
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-566761: exit status 85 (71.503657ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                  ARGS                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-818301 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-818301 │ jenkins │ v1.37.0 │ 05 Dec 25 06:04 UTC │                     │
	│ delete  │ --all                                                                                                                                                                   │ minikube             │ jenkins │ v1.37.0 │ 05 Dec 25 06:05 UTC │ 05 Dec 25 06:05 UTC │
	│ delete  │ -p download-only-818301                                                                                                                                                 │ download-only-818301 │ jenkins │ v1.37.0 │ 05 Dec 25 06:05 UTC │ 05 Dec 25 06:05 UTC │
	│ start   │ -o=json --download-only -p download-only-566761 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-566761 │ jenkins │ v1.37.0 │ 05 Dec 25 06:05 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/05 06:05:02
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 06:05:02.153377   16926 out.go:360] Setting OutFile to fd 1 ...
	I1205 06:05:02.153485   16926 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:05:02.153495   16926 out.go:374] Setting ErrFile to fd 2...
	I1205 06:05:02.153502   16926 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:05:02.153748   16926 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-12744/.minikube/bin
	I1205 06:05:02.154239   16926 out.go:368] Setting JSON to true
	I1205 06:05:02.155104   16926 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":2847,"bootTime":1764911855,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1205 06:05:02.155162   16926 start.go:143] virtualization: kvm guest
	I1205 06:05:02.157106   16926 out.go:99] [download-only-566761] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1205 06:05:02.157294   16926 notify.go:221] Checking for updates...
	I1205 06:05:02.158752   16926 out.go:171] MINIKUBE_LOCATION=21997
	I1205 06:05:02.160120   16926 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 06:05:02.161412   16926 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21997-12744/kubeconfig
	I1205 06:05:02.162655   16926 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-12744/.minikube
	I1205 06:05:02.164411   16926 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node download-only-566761 host does not exist
	  To start a cluster, run: "minikube start -p download-only-566761"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.2/LogsDuration (0.07s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/DeleteAll (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.2/DeleteAll (0.15s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-566761
--- PASS: TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds (0.15s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/json-events (3.02s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-826602 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-826602 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (3.023640345s)
--- PASS: TestDownloadOnly/v1.35.0-beta.0/json-events (3.02s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/cached-images
--- PASS: TestDownloadOnly/v1.35.0-beta.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/binaries
--- PASS: TestDownloadOnly/v1.35.0-beta.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-826602
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-826602: exit status 85 (72.151087ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                      ARGS                                                                                      │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-818301 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio        │ download-only-818301 │ jenkins │ v1.37.0 │ 05 Dec 25 06:04 UTC │                     │
	│ delete  │ --all                                                                                                                                                                          │ minikube             │ jenkins │ v1.37.0 │ 05 Dec 25 06:05 UTC │ 05 Dec 25 06:05 UTC │
	│ delete  │ -p download-only-818301                                                                                                                                                        │ download-only-818301 │ jenkins │ v1.37.0 │ 05 Dec 25 06:05 UTC │ 05 Dec 25 06:05 UTC │
	│ start   │ -o=json --download-only -p download-only-566761 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio        │ download-only-566761 │ jenkins │ v1.37.0 │ 05 Dec 25 06:05 UTC │                     │
	│ delete  │ --all                                                                                                                                                                          │ minikube             │ jenkins │ v1.37.0 │ 05 Dec 25 06:05 UTC │ 05 Dec 25 06:05 UTC │
	│ delete  │ -p download-only-566761                                                                                                                                                        │ download-only-566761 │ jenkins │ v1.37.0 │ 05 Dec 25 06:05 UTC │ 05 Dec 25 06:05 UTC │
	│ start   │ -o=json --download-only -p download-only-826602 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-826602 │ jenkins │ v1.37.0 │ 05 Dec 25 06:05 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/05 06:05:05
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 06:05:05.850536   17104 out.go:360] Setting OutFile to fd 1 ...
	I1205 06:05:05.850649   17104 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:05:05.850655   17104 out.go:374] Setting ErrFile to fd 2...
	I1205 06:05:05.850659   17104 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:05:05.850866   17104 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-12744/.minikube/bin
	I1205 06:05:05.851309   17104 out.go:368] Setting JSON to true
	I1205 06:05:05.852123   17104 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":2851,"bootTime":1764911855,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1205 06:05:05.852174   17104 start.go:143] virtualization: kvm guest
	I1205 06:05:05.853918   17104 out.go:99] [download-only-826602] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1205 06:05:05.854077   17104 notify.go:221] Checking for updates...
	I1205 06:05:05.855191   17104 out.go:171] MINIKUBE_LOCATION=21997
	I1205 06:05:05.856504   17104 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 06:05:05.857996   17104 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21997-12744/kubeconfig
	I1205 06:05:05.862206   17104 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-12744/.minikube
	I1205 06:05:05.863571   17104 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node download-only-826602 host does not exist
	  To start a cluster, run: "minikube start -p download-only-826602"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.35.0-beta.0/LogsDuration (0.07s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/DeleteAll (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.35.0-beta.0/DeleteAll (0.15s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-826602
--- PASS: TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds (0.15s)

                                                
                                    
x
+
TestBinaryMirror (0.65s)

                                                
                                                
=== RUN   TestBinaryMirror
I1205 06:05:09.691485   16702 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-869778 --alsologtostderr --binary-mirror http://127.0.0.1:35975 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-869778" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-869778
--- PASS: TestBinaryMirror (0.65s)
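
TestBinaryMirror starts minikube with --binary-mirror http://127.0.0.1:35975, so kubectl/kubelet/kubeadm downloads are resolved against a local HTTP endpoint instead of dl.k8s.io. A minimal sketch of standing up such a mirror with Go's standard file server; the port and directory layout here are assumptions for illustration, not taken from the test itself:

package main

import (
	"log"
	"net/http"
)

func main() {
	// Serve a local directory over plain HTTP. The directory is assumed to
	// mirror the relative paths minikube would otherwise request from the
	// upstream release host (the exact layout is not shown in this report).
	fs := http.FileServer(http.Dir("./mirror"))
	log.Println("serving local binary mirror on 127.0.0.1:35975")
	log.Fatal(http.ListenAndServe("127.0.0.1:35975", fs))
}

With a server like this running, the start line in the log above would fetch its Kubernetes binaries from 127.0.0.1:35975 rather than the public release host.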

                                                
                                    
x
+
TestOffline (80.98s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-119674 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-119674 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio: (1m19.512606128s)
helpers_test.go:175: Cleaning up "offline-crio-119674" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-119674
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-119674: (1.465277399s)
--- PASS: TestOffline (80.98s)

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-704432
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-704432: exit status 85 (66.858431ms)

                                                
                                                
-- stdout --
	* Profile "addons-704432" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-704432"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-704432
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-704432: exit status 85 (67.37595ms)

                                                
                                                
-- stdout --
	* Profile "addons-704432" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-704432"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
x
+
TestAddons/Setup (196.87s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-704432 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-amd64 start -p addons-704432 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (3m16.87040257s)
--- PASS: TestAddons/Setup (196.87s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.14s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-704432 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-704432 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.14s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/FakeCredentials (11.53s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-704432 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-704432 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [101b2566-3243-4868-8046-5629dae282ae] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [101b2566-3243-4868-8046-5629dae282ae] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 11.003610444s
addons_test.go:694: (dbg) Run:  kubectl --context addons-704432 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-704432 describe sa gcp-auth-test
addons_test.go:744: (dbg) Run:  kubectl --context addons-704432 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (11.53s)

                                                
                                    
x
+
TestAddons/parallel/Registry (18.25s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 6.352727ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-qpbwd" [e116af95-1fdd-4c9e-91e9-f32f80235739] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.003609085s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-c67pj" [290baf5e-37de-422d-aae1-4205f41f6d47] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003730288s
addons_test.go:392: (dbg) Run:  kubectl --context addons-704432 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-704432 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-704432 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (6.341830513s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-amd64 -p addons-704432 ip
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-704432 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (18.25s)
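
The registry check above exercises the addon from inside the cluster: a throwaway busybox pod issues wget --spider against the registry's cluster-internal Service DNS name. A hypothetical one-off reproduction of that probe, with the context and image names copied from the log above (not a new test helper):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same probe the test runs; -i keeps the session attached so --rm can
	// delete the short-lived pod once the wget command exits.
	cmd := exec.Command("kubectl", "--context", "addons-704432",
		"run", "--rm", "-i", "registry-probe", "--restart=Never",
		"--image=gcr.io/k8s-minikube/busybox", "--",
		"sh", "-c", "wget --spider -S http://registry.kube-system.svc.cluster.local")
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s", out)
	if err != nil {
		fmt.Println("registry probe failed:", err)
	}
}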

                                                
                                    
x
+
TestAddons/parallel/RegistryCreds (0.71s)

                                                
                                                
=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 4.849891ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-704432
addons_test.go:332: (dbg) Run:  kubectl --context addons-704432 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-704432 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.71s)

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (11.81s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
I1205 06:08:47.560801   16702 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-8jpfb" [55efb349-d167-4892-9751-da5ac8315a26] Running
I1205 06:08:47.569480   16702 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1205 06:08:47.569506   16702 kapi.go:107] duration metric: took 8.722563ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.003622911s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-704432 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-704432 addons disable inspektor-gadget --alsologtostderr -v=1: (5.805215879s)
--- PASS: TestAddons/parallel/InspektorGadget (11.81s)

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (5.74s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 15.835172ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-ncg69" [a9fe69b5-73c9-4209-8f08-e5e72c7f16ed] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.005587719s
addons_test.go:463: (dbg) Run:  kubectl --context addons-704432 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-704432 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.74s)

                                                
                                    
x
+
TestAddons/parallel/CSI (99s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:549: csi-hostpath-driver pods stabilized in 8.734033ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-704432 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-704432 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-704432 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-704432 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-704432 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-704432 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-704432 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [aeb9f3ba-f6aa-45a9-88e9-ee135d3bacb8] Pending
helpers_test.go:352: "task-pv-pod" [aeb9f3ba-f6aa-45a9-88e9-ee135d3bacb8] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [aeb9f3ba-f6aa-45a9-88e9-ee135d3bacb8] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 13.008850555s
addons_test.go:572: (dbg) Run:  kubectl --context addons-704432 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-704432 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:435: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
2025/12/05 06:09:05 [DEBUG] GET http://192.168.39.31:5000
helpers_test.go:427: (dbg) Run:  kubectl --context addons-704432 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-704432 delete pod task-pv-pod
addons_test.go:588: (dbg) Run:  kubectl --context addons-704432 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-704432 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-704432 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-704432 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-704432 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-704432 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-704432 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-704432 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-704432 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-704432 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-704432 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-704432 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-704432 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-704432 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-704432 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-704432 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-704432 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-704432 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-704432 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-704432 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-704432 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [c401aed5-8754-46f3-9040-540d9ad56eb3] Pending
helpers_test.go:352: "task-pv-pod-restore" [c401aed5-8754-46f3-9040-540d9ad56eb3] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [c401aed5-8754-46f3-9040-540d9ad56eb3] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 53.004075154s
addons_test.go:614: (dbg) Run:  kubectl --context addons-704432 delete pod task-pv-pod-restore
addons_test.go:614: (dbg) Done: kubectl --context addons-704432 delete pod task-pv-pod-restore: (1.016279226s)
addons_test.go:618: (dbg) Run:  kubectl --context addons-704432 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-704432 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-704432 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-704432 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-704432 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.950886671s)
--- PASS: TestAddons/parallel/CSI (99.00s)
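
The repeated "kubectl get pvc ... -o jsonpath={.status.phase}" lines above are the helper polling the claim's phase until it settles (presumably Bound) or the 6m0s wait expires. A standalone sketch of that polling loop, with the context, namespace, claim name, and timeout taken from the CSI test above; this is not the real helpers_test.go wait function:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForPVCPhase shells out to kubectl the same way the log above does,
// re-checking the claim's phase until it matches or the deadline passes.
func waitForPVCPhase(kubeCtx, ns, name, want string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", kubeCtx, "get", "pvc", name,
			"-o", "jsonpath={.status.phase}", "-n", ns).Output()
		if err == nil && strings.TrimSpace(string(out)) == want {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pvc %s/%s did not reach phase %q within %v", ns, name, want, timeout)
}

func main() {
	// Values taken from the CSI test above; the 6-minute timeout matches the log.
	if err := waitForPVCPhase("addons-704432", "default", "hpvc", "Bound", 6*time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("pvc hpvc is Bound")
}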

                                                
                                    
x
+
TestAddons/parallel/Headlamp (56.21s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-704432 --alsologtostderr -v=1
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:352: "headlamp-dfcdc64b-xldsm" [7e85d7c3-139b-44e1-842c-ffd224d7e681] Pending
helpers_test.go:352: "headlamp-dfcdc64b-xldsm" [7e85d7c3-139b-44e1-842c-ffd224d7e681] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-dfcdc64b-xldsm" [7e85d7c3-139b-44e1-842c-ffd224d7e681] Running / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-dfcdc64b-xldsm" [7e85d7c3-139b-44e1-842c-ffd224d7e681] Running
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 55.005474579s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-704432 addons disable headlamp --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Headlamp (56.21s)

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (6.54s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-5bdddb765-zm8lq" [52b81db3-2de5-426d-bd91-2ccc5c963b72] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.004219909s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-704432 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (6.54s)

                                                
                                    
x
+
TestAddons/parallel/LocalPath (57.77s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-704432 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-704432 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-704432 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-704432 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-704432 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-704432 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-704432 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-704432 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-704432 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [51ba5874-7973-4820-9c41-608c0fbab05e] Pending
helpers_test.go:352: "test-local-path" [51ba5874-7973-4820-9c41-608c0fbab05e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [51ba5874-7973-4820-9c41-608c0fbab05e] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [51ba5874-7973-4820-9c41-608c0fbab05e] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 8.003173498s
addons_test.go:967: (dbg) Run:  kubectl --context addons-704432 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-amd64 -p addons-704432 ssh "cat /opt/local-path-provisioner/pvc-dfd46569-e5e3-46ac-8dd9-36ab90471008_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-704432 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-704432 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-704432 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-704432 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (42.988524315s)
--- PASS: TestAddons/parallel/LocalPath (57.77s)

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (6.62s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-7bpgl" [e13f38d1-2164-4e32-9c93-9921cb031513] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.004255859s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-704432 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.62s)

                                                
                                    
x
+
TestAddons/parallel/Yakd (12.08s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-887c9" [32edda64-4933-42b2-baea-0d7a2f3fc58c] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.005611039s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-704432 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-704432 addons disable yakd --alsologtostderr -v=1: (6.077496569s)
--- PASS: TestAddons/parallel/Yakd (12.08s)

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (89.64s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-704432
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-704432: (1m29.442331308s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-704432
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-704432
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-704432
--- PASS: TestAddons/StoppedEnableDisable (89.64s)

                                                
                                    
x
+
TestCertOptions (42.48s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-311446 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-311446 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (41.131206834s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-311446 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-311446 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-311446 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-311446" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-311446
--- PASS: TestCertOptions (42.48s)

                                                
                                    
x
+
TestForceSystemdFlag (56.68s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-301907 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
E1205 07:05:37.468953   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/functional-895947/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-301907 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (55.678524996s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-301907 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-301907" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-301907
--- PASS: TestForceSystemdFlag (56.68s)

TestForceSystemdEnv (46.93s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-434541 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-434541 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (46.117639052s)
helpers_test.go:175: Cleaning up "force-systemd-env-434541" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-434541
--- PASS: TestForceSystemdEnv (46.93s)

TestErrorSpam/setup (34.88s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-220744 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-220744 --driver=kvm2  --container-runtime=crio
E1205 06:13:27.909462   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/addons-704432/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:13:27.915881   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/addons-704432/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:13:27.927283   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/addons-704432/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:13:27.948761   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/addons-704432/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:13:27.990242   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/addons-704432/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:13:28.071768   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/addons-704432/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:13:28.233337   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/addons-704432/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:13:28.555121   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/addons-704432/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:13:29.197226   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/addons-704432/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:13:30.478900   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/addons-704432/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:13:33.041768   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/addons-704432/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:13:38.163891   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/addons-704432/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:13:48.405449   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/addons-704432/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-220744 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-220744 --driver=kvm2  --container-runtime=crio: (34.884656433s)
--- PASS: TestErrorSpam/setup (34.88s)

TestErrorSpam/start (0.32s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-220744 --log_dir /tmp/nospam-220744 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-220744 --log_dir /tmp/nospam-220744 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-220744 --log_dir /tmp/nospam-220744 start --dry-run
--- PASS: TestErrorSpam/start (0.32s)

TestErrorSpam/status (0.66s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-220744 --log_dir /tmp/nospam-220744 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-220744 --log_dir /tmp/nospam-220744 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-220744 --log_dir /tmp/nospam-220744 status
--- PASS: TestErrorSpam/status (0.66s)

TestErrorSpam/pause (1.51s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-220744 --log_dir /tmp/nospam-220744 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-220744 --log_dir /tmp/nospam-220744 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-220744 --log_dir /tmp/nospam-220744 pause
--- PASS: TestErrorSpam/pause (1.51s)

TestErrorSpam/unpause (1.67s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-220744 --log_dir /tmp/nospam-220744 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-220744 --log_dir /tmp/nospam-220744 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-220744 --log_dir /tmp/nospam-220744 unpause
--- PASS: TestErrorSpam/unpause (1.67s)

TestErrorSpam/stop (4.97s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-220744 --log_dir /tmp/nospam-220744 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-220744 --log_dir /tmp/nospam-220744 stop: (1.926186216s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-220744 --log_dir /tmp/nospam-220744 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-220744 --log_dir /tmp/nospam-220744 stop: (1.651428451s)
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-220744 --log_dir /tmp/nospam-220744 stop
error_spam_test.go:172: (dbg) Done: out/minikube-linux-amd64 -p nospam-220744 --log_dir /tmp/nospam-220744 stop: (1.389025379s)
--- PASS: TestErrorSpam/stop (4.97s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21997-12744/.minikube/files/etc/test/nested/copy/16702/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (79.42s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-158571 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
E1205 06:14:08.887148   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/addons-704432/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:14:49.849829   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/addons-704432/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-158571 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m19.41888595s)
--- PASS: TestFunctional/serial/StartWithProxy (79.42s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (38.34s)

=== RUN   TestFunctional/serial/SoftStart
I1205 06:15:22.030284   16702 config.go:182] Loaded profile config "functional-158571": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-158571 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-158571 --alsologtostderr -v=8: (38.343805959s)
functional_test.go:678: soft start took 38.344636021s for "functional-158571" cluster.
I1205 06:16:00.374401   16702 config.go:182] Loaded profile config "functional-158571": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestFunctional/serial/SoftStart (38.34s)

TestFunctional/serial/KubeContext (0.05s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

TestFunctional/serial/KubectlGetPods (0.08s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-158571 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

TestFunctional/serial/CacheCmd/cache/add_remote (5.23s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-158571 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-158571 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-158571 cache add registry.k8s.io/pause:3.3: (3.247571609s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-158571 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-158571 cache add registry.k8s.io/pause:latest: (1.00462897s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (5.23s)

TestFunctional/serial/CacheCmd/cache/add_local (1.9s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-158571 /tmp/TestFunctionalserialCacheCmdcacheadd_local1629792441/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-158571 cache add minikube-local-cache-test:functional-158571
functional_test.go:1104: (dbg) Done: out/minikube-linux-amd64 -p functional-158571 cache add minikube-local-cache-test:functional-158571: (1.541828906s)
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-158571 cache delete minikube-local-cache-test:functional-158571
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-158571
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.90s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.18s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-158571 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.18s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.46s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-158571 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-158571 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-158571 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (172.100495ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-158571 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-158571 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.46s)

TestFunctional/serial/CacheCmd/cache/delete (0.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

TestFunctional/serial/MinikubeKubectlCmd (0.12s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-158571 kubectl -- --context functional-158571 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-158571 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

TestFunctional/serial/ExtraConfig (30.53s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-158571 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1205 06:16:11.772220   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/addons-704432/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-158571 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (30.531704828s)
functional_test.go:776: restart took 30.531815708s for "functional-158571" cluster.
I1205 06:16:40.253968   16702 config.go:182] Loaded profile config "functional-158571": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestFunctional/serial/ExtraConfig (30.53s)

TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-158571 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

TestFunctional/serial/LogsCmd (1.29s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-158571 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-158571 logs: (1.293072834s)
--- PASS: TestFunctional/serial/LogsCmd (1.29s)

TestFunctional/serial/LogsFileCmd (1.25s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-158571 logs --file /tmp/TestFunctionalserialLogsFileCmd3752379577/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-158571 logs --file /tmp/TestFunctionalserialLogsFileCmd3752379577/001/logs.txt: (1.251974907s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.25s)

TestFunctional/serial/InvalidService (4.38s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-158571 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-158571
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-158571: exit status 115 (229.018361ms)

-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.39.7:32191 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-158571 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.38s)

TestFunctional/parallel/ConfigCmd (0.41s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-158571 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-158571 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-158571 config get cpus: exit status 14 (64.316343ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-158571 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-158571 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-158571 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-158571 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-158571 config get cpus: exit status 14 (62.793907ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.41s)

TestFunctional/parallel/DashboardCmd (44.76s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-158571 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-158571 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 23036: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (44.76s)

TestFunctional/parallel/DryRun (0.24s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-158571 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-158571 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (117.274366ms)

-- stdout --
	* [functional-158571] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21997
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21997-12744/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-12744/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1205 06:16:58.322836   22928 out.go:360] Setting OutFile to fd 1 ...
	I1205 06:16:58.322972   22928 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:16:58.322984   22928 out.go:374] Setting ErrFile to fd 2...
	I1205 06:16:58.322990   22928 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:16:58.323299   22928 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-12744/.minikube/bin
	I1205 06:16:58.323917   22928 out.go:368] Setting JSON to false
	I1205 06:16:58.325090   22928 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":3563,"bootTime":1764911855,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1205 06:16:58.325163   22928 start.go:143] virtualization: kvm guest
	I1205 06:16:58.327036   22928 out.go:179] * [functional-158571] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1205 06:16:58.328638   22928 notify.go:221] Checking for updates...
	I1205 06:16:58.328654   22928 out.go:179]   - MINIKUBE_LOCATION=21997
	I1205 06:16:58.330473   22928 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 06:16:58.331924   22928 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-12744/kubeconfig
	I1205 06:16:58.333305   22928 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-12744/.minikube
	I1205 06:16:58.334603   22928 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1205 06:16:58.336122   22928 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 06:16:58.338024   22928 config.go:182] Loaded profile config "functional-158571": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1205 06:16:58.338756   22928 driver.go:422] Setting default libvirt URI to qemu:///system
	I1205 06:16:58.375727   22928 out.go:179] * Using the kvm2 driver based on existing profile
	I1205 06:16:58.376998   22928 start.go:309] selected driver: kvm2
	I1205 06:16:58.377016   22928 start.go:927] validating driver "kvm2" against &{Name:functional-158571 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.34.2 ClusterName:functional-158571 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.7 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Moun
tString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 06:16:58.377115   22928 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 06:16:58.379445   22928 out.go:203] 
	W1205 06:16:58.380746   22928 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1205 06:16:58.381920   22928 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-158571 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.24s)

TestFunctional/parallel/InternationalLanguage (0.12s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-158571 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-158571 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (116.219521ms)

-- stdout --
	* [functional-158571] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21997
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21997-12744/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-12744/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1205 06:16:58.572644   22968 out.go:360] Setting OutFile to fd 1 ...
	I1205 06:16:58.572775   22968 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:16:58.572788   22968 out.go:374] Setting ErrFile to fd 2...
	I1205 06:16:58.572795   22968 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:16:58.573197   22968 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-12744/.minikube/bin
	I1205 06:16:58.573723   22968 out.go:368] Setting JSON to false
	I1205 06:16:58.574801   22968 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":3564,"bootTime":1764911855,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1205 06:16:58.574880   22968 start.go:143] virtualization: kvm guest
	I1205 06:16:58.576546   22968 out.go:179] * [functional-158571] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1205 06:16:58.578083   22968 notify.go:221] Checking for updates...
	I1205 06:16:58.578099   22968 out.go:179]   - MINIKUBE_LOCATION=21997
	I1205 06:16:58.579259   22968 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 06:16:58.580436   22968 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-12744/kubeconfig
	I1205 06:16:58.581509   22968 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-12744/.minikube
	I1205 06:16:58.582672   22968 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1205 06:16:58.583847   22968 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 06:16:58.585526   22968 config.go:182] Loaded profile config "functional-158571": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1205 06:16:58.586003   22968 driver.go:422] Setting default libvirt URI to qemu:///system
	I1205 06:16:58.616198   22968 out.go:179] * Utilisation du pilote kvm2 basé sur le profil existant
	I1205 06:16:58.617348   22968 start.go:309] selected driver: kvm2
	I1205 06:16:58.617365   22968 start.go:927] validating driver "kvm2" against &{Name:functional-158571 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.34.2 ClusterName:functional-158571 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.7 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Moun
tString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 06:16:58.617489   22968 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 06:16:58.619762   22968 out.go:203] 
	W1205 06:16:58.620979   22968 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1205 06:16:58.622003   22968 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.12s)

TestFunctional/parallel/StatusCmd (0.68s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-158571 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-158571 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-158571 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.68s)

TestFunctional/parallel/ServiceCmdConnect (10.44s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-158571 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-158571 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-t2tkp" [e91debad-f87c-401b-8f1b-bfb09d785e3e] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-connect-7d85dfc575-t2tkp" [e91debad-f87c-401b-8f1b-bfb09d785e3e] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.003134509s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-158571 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.39.7:32113
functional_test.go:1680: http://192.168.39.7:32113: success! body:
Request served by hello-node-connect-7d85dfc575-t2tkp

HTTP/1.1 GET /

Host: 192.168.39.7:32113
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (10.44s)

TestFunctional/parallel/AddonsCmd (0.16s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-158571 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-158571 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.16s)

TestFunctional/parallel/PersistentVolumeClaim (49.24s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [6cc3caae-eebd-4847-abf7-76c32e82b35e] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003556605s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-158571 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-158571 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-158571 get pvc myclaim -o=json
I1205 06:16:53.898961   16702 retry.go:31] will retry after 2.851365798s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:36aefe33-f226-4491-b490-a21c40c57c17 ResourceVersion:734 Generation:0 CreationTimestamp:2025-12-05 06:16:53 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
pv.kubernetes.io/bind-completed:yes pv.kubernetes.io/bound-by-controller:yes volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName:pvc-36aefe33-f226-4491-b490-a21c40c57c17 StorageClassName:0xc00196ad00 VolumeMode:0xc00196ad10 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-158571 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-158571 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [3fd6f054-64d1-4536-aa05-7b7c66e2ba58] Pending
helpers_test.go:352: "sp-pod" [3fd6f054-64d1-4536-aa05-7b7c66e2ba58] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [3fd6f054-64d1-4536-aa05-7b7c66e2ba58] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 14.004815266s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-158571 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-158571 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:112: (dbg) Done: kubectl --context functional-158571 delete -f testdata/storage-provisioner/pod.yaml: (5.080563614s)
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-158571 apply -f testdata/storage-provisioner/pod.yaml
I1205 06:17:16.774414   16702 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [ec95154a-6289-4748-8340-dfbc93482106] Pending
helpers_test.go:352: "sp-pod" [ec95154a-6289-4748-8340-dfbc93482106] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [ec95154a-6289-4748-8340-dfbc93482106] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 20.004091337s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-158571 exec sp-pod -- ls /tmp/mount
2025/12/05 06:17:43 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (49.24s)

TestFunctional/parallel/SSHCmd (0.35s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-158571 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-158571 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.35s)

TestFunctional/parallel/CpCmd (1.14s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-158571 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-158571 ssh -n functional-158571 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-158571 cp functional-158571:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2396010752/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-158571 ssh -n functional-158571 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-158571 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-158571 ssh -n functional-158571 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.14s)

TestFunctional/parallel/MySQL (29.13s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-158571 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-5bb876957f-594nj" [f62682ab-cd4e-408c-a31d-7373b1a99f4e] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:352: "mysql-5bb876957f-594nj" [f62682ab-cd4e-408c-a31d-7373b1a99f4e] Running
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 26.475942468s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-158571 exec mysql-5bb876957f-594nj -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-158571 exec mysql-5bb876957f-594nj -- mysql -ppassword -e "show databases;": exit status 1 (860.570361ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I1205 06:17:26.411870   16702 retry.go:31] will retry after 1.487120948s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-158571 exec mysql-5bb876957f-594nj -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (29.13s)

TestFunctional/parallel/FileSync (0.16s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/16702/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-158571 ssh "sudo cat /etc/test/nested/copy/16702/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.16s)

                                                
                                    
TestFunctional/parallel/CertSync (1.02s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/16702.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-158571 ssh "sudo cat /etc/ssl/certs/16702.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/16702.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-158571 ssh "sudo cat /usr/share/ca-certificates/16702.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-158571 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/167022.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-158571 ssh "sudo cat /etc/ssl/certs/167022.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/167022.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-158571 ssh "sudo cat /usr/share/ca-certificates/167022.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-158571 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.02s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-158571 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.32s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-158571 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-158571 ssh "sudo systemctl is-active docker": exit status 1 (166.107516ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-158571 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-158571 ssh "sudo systemctl is-active containerd": exit status 1 (156.027839ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.32s)
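For context, `systemctl is-active` prints the unit state and exits non-zero for anything other than "active", so stdout "inactive" with exit status 3 is the expected result for docker and containerd on a crio cluster. A rough Go sketch of the same probe (runtimeInactive is a hypothetical helper, not test code; the binary path matches this report's layout):

// Sketch: check that a container runtime unit is inactive inside the minikube VM.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func runtimeInactive(profile, unit string) bool {
	// Output() still returns captured stdout when the command exits non-zero,
	// which is exactly the case here: "inactive" plus exit status 3.
	out, _ := exec.Command("out/minikube-linux-amd64", "-p", profile,
		"ssh", "sudo systemctl is-active "+unit).Output()
	return strings.TrimSpace(string(out)) == "inactive"
}

func main() {
	for _, unit := range []string{"docker", "containerd"} {
		fmt.Printf("%s inactive: %v\n", unit, runtimeInactive("functional-158571", unit))
	}
}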

                                                
                                    
TestFunctional/parallel/License (0.22s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.22s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (8.21s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-158571 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-158571 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-5tcx5" [911f4918-4238-4825-b58f-df3e5ec2c0f3] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-75c85bcc94-5tcx5" [911f4918-4238-4825-b58f-df3e5ec2c0f3] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 8.004514093s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (8.21s)

                                                
                                    
TestFunctional/parallel/Version/short (0.06s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-158571 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

                                                
                                    
TestFunctional/parallel/Version/components (0.55s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-158571 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.55s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.2s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-158571 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-158571 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ docker.io/kicbase/echo-server           │ latest             │ 9056ab77afb8e │ 4.94MB │
│ localhost/kicbase/echo-server           │ functional-158571  │ 9056ab77afb8e │ 4.94MB │
│ docker.io/library/mysql                 │ 5.7                │ 5107333e08a87 │ 520MB  │
│ gcr.io/k8s-minikube/busybox             │ latest             │ beae173ccac6a │ 1.46MB │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 52546a367cc9e │ 76.1MB │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ localhost/minikube-local-cache-test     │ functional-158571  │ e9bc44d5c4fe1 │ 3.33kB │
│ localhost/my-image                      │ functional-158571  │ 78a83a2725858 │ 1.47MB │
│ registry.k8s.io/kube-apiserver          │ v1.34.2            │ a5f569d49a979 │ 89MB   │
│ registry.k8s.io/kube-proxy              │ v1.34.2            │ 8aa150647e88a │ 73.1MB │
│ registry.k8s.io/kube-scheduler          │ v1.34.2            │ 88320b5498ff2 │ 53.8MB │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ docker.io/library/nginx                 │ latest             │ 60adc2e137e75 │ 155MB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
│ registry.k8s.io/etcd                    │ 3.6.5-0            │ a3e246e9556e9 │ 63.6MB │
│ registry.k8s.io/kube-controller-manager │ v1.34.2            │ 01e8bacf0f500 │ 76MB   │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-158571 image ls --format table --alsologtostderr:
I1205 06:17:26.469940   23447 out.go:360] Setting OutFile to fd 1 ...
I1205 06:17:26.470171   23447 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1205 06:17:26.470179   23447 out.go:374] Setting ErrFile to fd 2...
I1205 06:17:26.470183   23447 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1205 06:17:26.470394   23447 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-12744/.minikube/bin
I1205 06:17:26.471010   23447 config.go:182] Loaded profile config "functional-158571": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1205 06:17:26.471106   23447 config.go:182] Loaded profile config "functional-158571": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1205 06:17:26.473202   23447 ssh_runner.go:195] Run: systemctl --version
I1205 06:17:26.475736   23447 main.go:143] libmachine: domain functional-158571 has defined MAC address 52:54:00:b0:54:27 in network mk-functional-158571
I1205 06:17:26.476218   23447 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b0:54:27", ip: ""} in network mk-functional-158571: {Iface:virbr1 ExpiryTime:2025-12-05 07:14:17 +0000 UTC Type:0 Mac:52:54:00:b0:54:27 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:functional-158571 Clientid:01:52:54:00:b0:54:27}
I1205 06:17:26.476244   23447 main.go:143] libmachine: domain functional-158571 has defined IP address 192.168.39.7 and MAC address 52:54:00:b0:54:27 in network mk-functional-158571
I1205 06:17:26.476394   23447 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-12744/.minikube/machines/functional-158571/id_rsa Username:docker}
I1205 06:17:26.557808   23447 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.20s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.66s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-158571 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-158571 image ls --format json --alsologtostderr:
[{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"e9bc44d5c4fe114cfe36ff8199ebbeaff448c4842982432ba8648d8896001650","repoDigests":["localhost/minikube-local-cache-test@sha256:e701a3cb6da7d451d9ea09191a7059e17ae649a7e0f0bcd96c6223aa680a8665"],"repoTags":["localhost/minikube-local-cache-test:functional-158571"],"size":"3330"},{"id":"52546a367c
c9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"76103547"},{"id":"a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1","repoDigests":["registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534","registry.k8s.io/etcd@sha256:28cf8781a30d69c2e3a969764548497a949a363840e1de34e014608162644778"],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"63585106"},{"id":"8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45","repoDigests":["registry.k8s.io/kube-proxy@sha256:1512fa1bace72d9bcaa7471e364e972c60805474184840a707b6afa05bde3a74","registry.k8s.io/kube-proxy@sha256:d8b843ac8a5e861238df24a4db8c2ddced89948633400c4660464472045276f5"],"repoTags":["registry.k8s.io/kube
-proxy:v1.34.2"],"size":"73145240"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85","repoDigests":["registry.k8s.io/kube-apiserver@sha256:e009ef63deaf797763b5bd423d04a099a2fe414a081bf7d216b43bc9e76b9077","registry.k8s.io/kube-apiserver@sha256:f0e0dc00029af1a9258587ef181f17a9eb7605d3d69a72668f4f6709f72005fd"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34
.2"],"size":"89046001"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf","localhost/kicb
ase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["docker.io/kicbase/echo-server:latest","localhost/kicbase/echo-server:functional-158571"],"size":"4944818"},{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"77c3eaaf425634393d8fc238f4ec7c0708e9e1a252e713855673c3698eaa11c0","repoDigests":["docker.io/library/a44564f75e1d5225206ca3e5c5bf52b018d9559e67bf49c466a2e568badfb7c5-tmp@sha256:3d701ee32287e54015f8536cddfba86a3f0f2a4b97324bfd27c
deb9ec0c24b76"],"repoTags":[],"size":"1466018"},{"id":"60adc2e137e757418d4d771822fa3b3f5d3b4ad58ef2385d200c9ee78375b6d5","repoDigests":["docker.io/library/nginx@sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42","docker.io/library/nginx@sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541"],"repoTags":["docker.io/library/nginx:latest"],"size":"155491845"},{"id":"01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:5c3998664b77441c09a4604f1361b230e63f7a6f299fc02fc1ebd1a12c38e3eb","registry.k8s.io/kube-controller-manager@sha256:9eb769377f8fdeab9e1428194e2b7d19584b63a5fda8f2f406900ee7893c2f4e"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.2"],"size":"76004183"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:
ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},{"id":"78a83a27258588324f911c853abe24de0a9f7874b10575a7961fe4b000ce862c","repoDigests":["localhost/my-image@sha256:79729bc8a4e02992408292db15bdacff7be7d3fa377e436a7a099314e8da3d18"],"repoTags":["localhost/my-image:functional-158571"],"size":"1468598"},{"id":"88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952","repoDigests":["registry.k8s.io/kube-scheduler@sha256:44229946c0966b07d5c0791681d803e77258949985e49b4ab0fbdff99d2a48c6","registry.k8s.io/kube-scheduler@sha256:7a0dd12264041dec5dcbb44eeaad051d21560c6d9aa0051cc68ed281a4c26dda"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.2"],"size":"53848919"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-158571 image ls --format json --alsologtostderr:
I1205 06:17:25.808372   23436 out.go:360] Setting OutFile to fd 1 ...
I1205 06:17:25.808503   23436 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1205 06:17:25.808512   23436 out.go:374] Setting ErrFile to fd 2...
I1205 06:17:25.808517   23436 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1205 06:17:25.808720   23436 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-12744/.minikube/bin
I1205 06:17:25.809234   23436 config.go:182] Loaded profile config "functional-158571": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1205 06:17:25.809325   23436 config.go:182] Loaded profile config "functional-158571": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1205 06:17:25.811527   23436 ssh_runner.go:195] Run: systemctl --version
I1205 06:17:25.814232   23436 main.go:143] libmachine: domain functional-158571 has defined MAC address 52:54:00:b0:54:27 in network mk-functional-158571
I1205 06:17:25.814710   23436 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b0:54:27", ip: ""} in network mk-functional-158571: {Iface:virbr1 ExpiryTime:2025-12-05 07:14:17 +0000 UTC Type:0 Mac:52:54:00:b0:54:27 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:functional-158571 Clientid:01:52:54:00:b0:54:27}
I1205 06:17:25.814738   23436 main.go:143] libmachine: domain functional-158571 has defined IP address 192.168.39.7 and MAC address 52:54:00:b0:54:27 in network mk-functional-158571
I1205 06:17:25.814902   23436 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-12744/.minikube/machines/functional-158571/id_rsa Username:docker}
I1205 06:17:25.918441   23436 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.66s)
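The JSON blob above is a flat list of objects with id, repoDigests, repoTags and size fields. A small Go sketch that decodes that output (listedImage is an invented type name; the JSON tags mirror the keys shown in the log):

// Sketch: decode `image ls --format json` into structs and print a short summary.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type listedImage struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"`
}

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-158571",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		panic(err)
	}
	var images []listedImage
	if err := json.Unmarshal(out, &images); err != nil {
		panic(err)
	}
	for _, img := range images {
		id := img.ID
		if len(id) > 13 {
			id = id[:13] // same truncated form the table view prints
		}
		fmt.Println(id, img.RepoTags, img.Size)
	}
}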

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (1.6s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-158571 image ls --format yaml --alsologtostderr
functional_test.go:276: (dbg) Done: out/minikube-linux-amd64 -p functional-158571 image ls --format yaml --alsologtostderr: (1.602106154s)
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-158571 image ls --format yaml --alsologtostderr:
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: e9bc44d5c4fe114cfe36ff8199ebbeaff448c4842982432ba8648d8896001650
repoDigests:
- localhost/minikube-local-cache-test@sha256:e701a3cb6da7d451d9ea09191a7059e17ae649a7e0f0bcd96c6223aa680a8665
repoTags:
- localhost/minikube-local-cache-test:functional-158571
size: "3330"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "76103547"
- id: a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:e009ef63deaf797763b5bd423d04a099a2fe414a081bf7d216b43bc9e76b9077
- registry.k8s.io/kube-apiserver@sha256:f0e0dc00029af1a9258587ef181f17a9eb7605d3d69a72668f4f6709f72005fd
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.2
size: "89046001"
- id: 01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:5c3998664b77441c09a4604f1361b230e63f7a6f299fc02fc1ebd1a12c38e3eb
- registry.k8s.io/kube-controller-manager@sha256:9eb769377f8fdeab9e1428194e2b7d19584b63a5fda8f2f406900ee7893c2f4e
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.2
size: "76004183"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45
repoDigests:
- registry.k8s.io/kube-proxy@sha256:1512fa1bace72d9bcaa7471e364e972c60805474184840a707b6afa05bde3a74
- registry.k8s.io/kube-proxy@sha256:d8b843ac8a5e861238df24a4db8c2ddced89948633400c4660464472045276f5
repoTags:
- registry.k8s.io/kube-proxy:v1.34.2
size: "73145240"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
- localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- docker.io/kicbase/echo-server:latest
- localhost/kicbase/echo-server:functional-158571
size: "4944818"
- id: a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1
repoDigests:
- registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534
- registry.k8s.io/etcd@sha256:28cf8781a30d69c2e3a969764548497a949a363840e1de34e014608162644778
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "63585106"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:44229946c0966b07d5c0791681d803e77258949985e49b4ab0fbdff99d2a48c6
- registry.k8s.io/kube-scheduler@sha256:7a0dd12264041dec5dcbb44eeaad051d21560c6d9aa0051cc68ed281a4c26dda
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.2
size: "53848919"
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: 60adc2e137e757418d4d771822fa3b3f5d3b4ad58ef2385d200c9ee78375b6d5
repoDigests:
- docker.io/library/nginx@sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42
- docker.io/library/nginx@sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541
repoTags:
- docker.io/library/nginx:latest
size: "155491845"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-158571 image ls --format yaml --alsologtostderr:
I1205 06:17:17.377269   23369 out.go:360] Setting OutFile to fd 1 ...
I1205 06:17:17.377358   23369 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1205 06:17:17.377362   23369 out.go:374] Setting ErrFile to fd 2...
I1205 06:17:17.377366   23369 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1205 06:17:17.377592   23369 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-12744/.minikube/bin
I1205 06:17:17.378118   23369 config.go:182] Loaded profile config "functional-158571": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1205 06:17:17.378209   23369 config.go:182] Loaded profile config "functional-158571": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1205 06:17:17.380216   23369 ssh_runner.go:195] Run: systemctl --version
I1205 06:17:17.382741   23369 main.go:143] libmachine: domain functional-158571 has defined MAC address 52:54:00:b0:54:27 in network mk-functional-158571
I1205 06:17:17.383191   23369 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b0:54:27", ip: ""} in network mk-functional-158571: {Iface:virbr1 ExpiryTime:2025-12-05 07:14:17 +0000 UTC Type:0 Mac:52:54:00:b0:54:27 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:functional-158571 Clientid:01:52:54:00:b0:54:27}
I1205 06:17:17.383222   23369 main.go:143] libmachine: domain functional-158571 has defined IP address 192.168.39.7 and MAC address 52:54:00:b0:54:27 in network mk-functional-158571
I1205 06:17:17.383380   23369 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-12744/.minikube/machines/functional-158571/id_rsa Username:docker}
I1205 06:17:17.473037   23369 ssh_runner.go:195] Run: sudo crictl images --output json
I1205 06:17:18.917616   23369 ssh_runner.go:235] Completed: sudo crictl images --output json: (1.444534611s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (1.60s)

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1.55s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:357: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.52463442s)
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-158571
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.55s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.14s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-158571 image load --daemon kicbase/echo-server:functional-158571 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-158571 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.14s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.8s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-158571 image load --daemon kicbase/echo-server:functional-158571 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-158571 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.80s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.52s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-158571
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-158571 image load --daemon kicbase/echo-server:functional-158571 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-158571 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.52s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.48s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-158571 image save kicbase/echo-server:functional-158571 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.48s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.43s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-158571 image rm kicbase/echo-server:functional-158571 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-158571 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.43s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.62s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-158571 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-158571 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.62s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.56s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-158571
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-158571 image save --daemon kicbase/echo-server:functional-158571 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-158571
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.56s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.25s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-158571 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.25s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.23s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-158571 service list -o json
functional_test.go:1504: Took "233.513604ms" to run "out/minikube-linux-amd64 -p functional-158571 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.23s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.24s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-158571 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.39.7:30371
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.24s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.24s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-158571 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.24s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.23s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-158571 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.39.7:30371
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.23s)
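With the NodePort endpoint printed above (http://192.168.39.7:30371 in this run), the hello-node service can also be probed directly. A minimal Go sketch, assuming the echo-server image simply reflects the request back; the hard-coded address is purely this run's value:

// Sketch: issue a GET against the NodePort URL reported by `service hello-node --url`.
package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	resp, err := http.Get("http://192.168.39.7:30371")
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status)
	fmt.Println(string(body))
}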

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.4s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.40s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (15.2s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-158571 /tmp/TestFunctionalparallelMountCmdany-port4190849618/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1764915416696216167" to /tmp/TestFunctionalparallelMountCmdany-port4190849618/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1764915416696216167" to /tmp/TestFunctionalparallelMountCmdany-port4190849618/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1764915416696216167" to /tmp/TestFunctionalparallelMountCmdany-port4190849618/001/test-1764915416696216167
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-158571 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-158571 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (164.773022ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1205 06:16:56.861345   16702 retry.go:31] will retry after 503.441901ms: exit status 1
I1205 06:16:56.965759   16702 detect.go:223] nested VM detected
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-158571 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-158571 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec  5 06:16 created-by-test
-rw-r--r-- 1 docker docker 24 Dec  5 06:16 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec  5 06:16 test-1764915416696216167
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-158571 ssh cat /mount-9p/test-1764915416696216167
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-158571 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [117bb437-d233-445e-a50a-df86eb9b8097] Pending
helpers_test.go:352: "busybox-mount" [117bb437-d233-445e-a50a-df86eb9b8097] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [117bb437-d233-445e-a50a-df86eb9b8097] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [117bb437-d233-445e-a50a-df86eb9b8097] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 13.002885232s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-158571 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-158571 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-158571 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-158571 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-158571 /tmp/TestFunctionalparallelMountCmdany-port4190849618/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (15.20s)
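The findmnt probe above fails until the 9p mount becomes visible in the guest, and the harness simply retries it. A rough Go sketch of the same poll-until-mounted idea (waitForMount and the timeout are hypothetical; the probe string matches the command in the log):

// Sketch: poll until a 9p mount is visible inside the minikube VM.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func waitForMount(profile, mountPoint string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("out/minikube-linux-amd64", "-p", profile, "ssh",
			fmt.Sprintf("findmnt -T %s | grep 9p", mountPoint))
		if cmd.Run() == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("mount %s not visible after %v", mountPoint, timeout)
}

func main() {
	if err := waitForMount("functional-158571", "/mount-9p", 30*time.Second); err != nil {
		fmt.Println(err)
	}
}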

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.29s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "225.736926ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "60.906648ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.29s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.33s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "270.528462ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "58.271301ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.33s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.08s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-158571 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.08s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.07s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-158571 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.07s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.07s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-158571 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.07s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.31s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-158571 /tmp/TestFunctionalparallelMountCmdspecific-port799420193/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-158571 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-158571 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (173.307904ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1205 06:17:12.072562   16702 retry.go:31] will retry after 413.519746ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-158571 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-158571 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-158571 /tmp/TestFunctionalparallelMountCmdspecific-port799420193/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-158571 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-158571 ssh "sudo umount -f /mount-9p": exit status 1 (174.363266ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-158571 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-158571 /tmp/TestFunctionalparallelMountCmdspecific-port799420193/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.31s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.27s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-158571 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1577575440/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-158571 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1577575440/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-158571 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1577575440/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-158571 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-158571 ssh "findmnt -T" /mount1: exit status 1 (195.585527ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1205 06:17:13.408853   16702 retry.go:31] will retry after 458.720493ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-158571 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-158571 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-158571 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-158571 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-158571 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1577575440/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-158571 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1577575440/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-158571 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1577575440/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.27s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-158571
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-158571
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-158571
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21997-12744/.minikube/files/etc/test/nested/copy/16702/hosts
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (85.35s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-895947 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
E1205 06:18:27.901258   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/addons-704432/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:18:55.616169   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/addons-704432/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-895947 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (1m25.34539808s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (85.35s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart (38.71s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart
I1205 06:19:09.756929   16702 config.go:182] Loaded profile config "functional-895947": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-895947 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-895947 --alsologtostderr -v=8: (38.711001221s)
functional_test.go:678: soft start took 38.711367624s for "functional-895947" cluster.
I1205 06:19:48.468235   16702 config.go:182] Loaded profile config "functional-895947": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart (38.71s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext (0.04s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods (0.14s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-895947 get po -A
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods (0.14s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote (2.96s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-895947 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-895947 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-895947 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-895947 cache add registry.k8s.io/pause:latest: (1.049610826s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote (2.96s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local (1.85s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-895947 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialCach28534594/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-895947 cache add minikube-local-cache-test:functional-895947
functional_test.go:1104: (dbg) Done: out/minikube-linux-amd64 -p functional-895947 cache add minikube-local-cache-test:functional-895947: (1.562402935s)
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-895947 cache delete minikube-local-cache-test:functional-895947
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-895947
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local (1.85s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node (0.18s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-895947 ssh sudo crictl images
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node (0.18s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload (1.5s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-895947 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-895947 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-895947 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (172.16124ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-895947 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-895947 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload (1.50s)
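Editor's note: the cycle above is remove the image inside the node, confirm it is gone (`crictl inspecti` fails), run `minikube cache reload`, then confirm it is back. A minimal Go sketch of that sequence follows, assuming `minikube` is on PATH and the functional-895947 profile exists; the helper and its error handling are illustrative, not the harness's own.

    package main

    import (
        "fmt"
        "os/exec"
    )

    // run executes a command and returns its exit code (0 on success).
    func run(name string, args ...string) int {
        if err := exec.Command(name, args...).Run(); err != nil {
            if ee, ok := err.(*exec.ExitError); ok {
                return ee.ExitCode()
            }
            return -1
        }
        return 0
    }

    func main() {
        profile, img := "functional-895947", "registry.k8s.io/pause:latest"

        // Remove the image inside the node, then confirm it is gone (non-zero exit expected).
        run("minikube", "-p", profile, "ssh", "sudo", "crictl", "rmi", img)
        fmt.Println("inspecti after rmi (want non-zero):", run("minikube", "-p", profile, "ssh", "sudo", "crictl", "inspecti", img))

        // Reload the cached images and confirm the image is present again (zero exit expected).
        run("minikube", "-p", profile, "cache", "reload")
        fmt.Println("inspecti after reload (want 0):", run("minikube", "-p", profile, "ssh", "sudo", "crictl", "inspecti", img))
    }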

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete (0.12s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete (0.12s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd (0.12s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-895947 kubectl -- --context functional-895947 get pods
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd (0.12s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-895947 get pods
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig (34.5s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-895947 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-895947 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (34.499984843s)
functional_test.go:776: restart took 34.500097512s for "functional-895947" cluster.
I1205 06:20:30.106191   16702 config.go:182] Loaded profile config "functional-895947": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig (34.50s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth (0.06s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-895947 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth (0.06s)
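Editor's note: the phase/status lines above come from listing the control-plane pods as JSON and reading `.status.phase` plus the Ready condition. A small Go sketch of that parsing follows, assuming `kubectl` is on PATH and the functional-895947 context exists; the struct covers only the fields actually read.

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    type podList struct {
        Items []struct {
            Metadata struct {
                Name string `json:"name"`
            } `json:"metadata"`
            Status struct {
                Phase      string `json:"phase"`
                Conditions []struct {
                    Type   string `json:"type"`
                    Status string `json:"status"`
                } `json:"conditions"`
            } `json:"status"`
        } `json:"items"`
    }

    func main() {
        out, err := exec.Command("kubectl", "--context", "functional-895947",
            "get", "po", "-l", "tier=control-plane", "-n", "kube-system", "-o", "json").Output()
        if err != nil {
            panic(err)
        }
        var pods podList
        if err := json.Unmarshal(out, &pods); err != nil {
            panic(err)
        }
        for _, p := range pods.Items {
            ready := "NotReady"
            for _, c := range p.Status.Conditions {
                if c.Type == "Ready" && c.Status == "True" {
                    ready = "Ready"
                }
            }
            fmt.Printf("%s phase: %s, status: %s\n", p.Metadata.Name, p.Status.Phase, ready)
        }
    }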

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd (1.28s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-895947 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-895947 logs: (1.283102911s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd (1.28s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd (1.28s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-895947 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialLogs1863270472/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-895947 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialLogs1863270472/001/logs.txt: (1.275815873s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd (1.28s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService (4.53s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-895947 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-895947
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-895947: exit status 115 (232.311395ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬────────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL             │
	├───────────┼─────────────┼─────────────┼────────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.39.53:30897 │
	└───────────┴─────────────┴─────────────┴────────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-895947 delete -f testdata/invalidsvc.yaml
functional_test.go:2332: (dbg) Done: kubectl --context functional-895947 delete -f testdata/invalidsvc.yaml: (1.110170767s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService (4.53s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd (0.43s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-895947 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-895947 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-895947 config get cpus: exit status 14 (70.567966ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-895947 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-895947 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-895947 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-895947 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-895947 config get cpus: exit status 14 (57.920827ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd (0.43s)
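Editor's note: the exit status 14 above is what `minikube config get` returns when the key is not set, which is why both `get cpus` calls after an `unset` are expected to fail. A sketch of the same set/get/unset round trip follows, reading the exit code via exec.ExitError; the profile name is an assumption carried over from the log.

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
        "strings"
    )

    // configGet returns the value of a minikube config key and the command's exit code.
    func configGet(profile, key string) (string, int) {
        out, err := exec.Command("minikube", "-p", profile, "config", "get", key).Output()
        code := 0
        var ee *exec.ExitError
        if errors.As(err, &ee) {
            code = ee.ExitCode() // 14 is observed above when the key is not set
        }
        return strings.TrimSpace(string(out)), code
    }

    func main() {
        profile := "functional-895947" // assumed existing profile

        _, code := configGet(profile, "cpus")
        fmt.Println("before set, exit code:", code)

        _ = exec.Command("minikube", "-p", profile, "config", "set", "cpus", "2").Run()
        val, code := configGet(profile, "cpus")
        fmt.Println("after set:", val, "exit code:", code)

        _ = exec.Command("minikube", "-p", profile, "config", "unset", "cpus").Run()
        _, code = configGet(profile, "cpus")
        fmt.Println("after unset, exit code:", code)
    }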

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd (32.84s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-895947 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-895947 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 26179: os: process already finished
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd (32.84s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun (0.25s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-895947 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-895947 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: exit status 23 (126.452876ms)

                                                
                                                
-- stdout --
	* [functional-895947] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21997
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21997-12744/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-12744/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1205 06:20:47.477823   25891 out.go:360] Setting OutFile to fd 1 ...
	I1205 06:20:47.478576   25891 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:20:47.478589   25891 out.go:374] Setting ErrFile to fd 2...
	I1205 06:20:47.478596   25891 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:20:47.478930   25891 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-12744/.minikube/bin
	I1205 06:20:47.479534   25891 out.go:368] Setting JSON to false
	I1205 06:20:47.480788   25891 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":3792,"bootTime":1764911855,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1205 06:20:47.480880   25891 start.go:143] virtualization: kvm guest
	I1205 06:20:47.482294   25891 out.go:179] * [functional-895947] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1205 06:20:47.483572   25891 notify.go:221] Checking for updates...
	I1205 06:20:47.483588   25891 out.go:179]   - MINIKUBE_LOCATION=21997
	I1205 06:20:47.484943   25891 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 06:20:47.486359   25891 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-12744/kubeconfig
	I1205 06:20:47.487925   25891 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-12744/.minikube
	I1205 06:20:47.489240   25891 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1205 06:20:47.490659   25891 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 06:20:47.492478   25891 config.go:182] Loaded profile config "functional-895947": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1205 06:20:47.493182   25891 driver.go:422] Setting default libvirt URI to qemu:///system
	I1205 06:20:47.524708   25891 out.go:179] * Using the kvm2 driver based on existing profile
	I1205 06:20:47.525938   25891 start.go:309] selected driver: kvm2
	I1205 06:20:47.525959   25891 start.go:927] validating driver "kvm2" against &{Name:functional-895947 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.35.0-beta.0 ClusterName:functional-895947 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.53 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:
26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 06:20:47.526093   25891 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 06:20:47.528987   25891 out.go:203] 
	W1205 06:20:47.530106   25891 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1205 06:20:47.531305   25891 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-895947 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun (0.25s)
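Editor's note: the exit status 23 above is the RSRC_INSUFFICIENT_REQ_MEMORY failure path, triggered because 250MB is below the usable minimum, and `--dry-run` lets that validation run without touching the cluster. A hedged sketch that asserts this, using the same flags as the logged invocation:

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    func main() {
        // Same invocation as the log above: --dry-run validates flags without mutating the cluster.
        cmd := exec.Command("minikube", "start", "-p", "functional-895947",
            "--dry-run", "--memory", "250MB", "--alsologtostderr",
            "--driver=kvm2", "--container-runtime=crio", "--kubernetes-version=v1.35.0-beta.0")
        err := cmd.Run()

        var ee *exec.ExitError
        switch {
        case err == nil:
            fmt.Println("unexpected success: 250MB should be rejected")
        case errors.As(err, &ee) && ee.ExitCode() == 23:
            fmt.Println("got exit 23 (RSRC_INSUFFICIENT_REQ_MEMORY), as expected")
        default:
            fmt.Println("unexpected error:", err)
        }
    }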

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage (0.13s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-895947 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-895947 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: exit status 23 (130.981279ms)

                                                
                                                
-- stdout --
	* [functional-895947] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21997
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21997-12744/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-12744/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1205 06:20:47.729016   25923 out.go:360] Setting OutFile to fd 1 ...
	I1205 06:20:47.729148   25923 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:20:47.729162   25923 out.go:374] Setting ErrFile to fd 2...
	I1205 06:20:47.729170   25923 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:20:47.729632   25923 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-12744/.minikube/bin
	I1205 06:20:47.730211   25923 out.go:368] Setting JSON to false
	I1205 06:20:47.731367   25923 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":3793,"bootTime":1764911855,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1205 06:20:47.731439   25923 start.go:143] virtualization: kvm guest
	I1205 06:20:47.733379   25923 out.go:179] * [functional-895947] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1205 06:20:47.735088   25923 notify.go:221] Checking for updates...
	I1205 06:20:47.735132   25923 out.go:179]   - MINIKUBE_LOCATION=21997
	I1205 06:20:47.736445   25923 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 06:20:47.737779   25923 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-12744/kubeconfig
	I1205 06:20:47.738997   25923 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-12744/.minikube
	I1205 06:20:47.740260   25923 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1205 06:20:47.741499   25923 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 06:20:47.743329   25923 config.go:182] Loaded profile config "functional-895947": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1205 06:20:47.744049   25923 driver.go:422] Setting default libvirt URI to qemu:///system
	I1205 06:20:47.776319   25923 out.go:179] * Utilisation du pilote kvm2 basé sur le profil existant
	I1205 06:20:47.777622   25923 start.go:309] selected driver: kvm2
	I1205 06:20:47.777643   25923 start.go:927] validating driver "kvm2" against &{Name:functional-895947 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.35.0-beta.0 ClusterName:functional-895947 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.53 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:
26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 06:20:47.777813   25923 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 06:20:47.780294   25923 out.go:203] 
	W1205 06:20:47.781699   25923 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1205 06:20:47.783043   25923 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage (0.13s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd (0.65s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-895947 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-895947 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-895947 status -o json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd (0.65s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect (9.55s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-895947 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-895947 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-9f67c86d4-67p2g" [cafdf87f-1247-464d-a2e5-47cade8550cd] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-connect-9f67c86d4-67p2g" [cafdf87f-1247-464d-a2e5-47cade8550cd] Running
functional_test.go:1645: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 9.011047765s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-895947 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.39.53:31292
functional_test.go:1680: http://192.168.39.53:31292: success! body:
Request served by hello-node-connect-9f67c86d4-67p2g

                                                
                                                
HTTP/1.1 GET /

                                                
                                                
Host: 192.168.39.53:31292
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect (9.55s)
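Editor's note: the flow above is create a deployment, expose it as a NodePort service, resolve the URL with `minikube service --url`, then GET it and check the echo body. A compact Go sketch of the same flow follows; `kubectl wait` stands in for the harness's pod polling, and the context and service names are assumptions taken from the log.

    package main

    import (
        "fmt"
        "io"
        "net/http"
        "os/exec"
        "strings"
    )

    // must runs a command and panics with its combined output on failure.
    func must(cmd *exec.Cmd) string {
        out, err := cmd.CombinedOutput()
        if err != nil {
            panic(fmt.Sprintf("%v: %s", err, out))
        }
        return strings.TrimSpace(string(out))
    }

    func main() {
        ctx, svc := "functional-895947", "hello-node-connect"

        must(exec.Command("kubectl", "--context", ctx, "create", "deployment", svc, "--image", "kicbase/echo-server"))
        must(exec.Command("kubectl", "--context", ctx, "expose", "deployment", svc, "--type=NodePort", "--port=8080"))
        // Simpler stand-in for the harness's pod polling.
        must(exec.Command("kubectl", "--context", ctx, "wait", "--for=condition=available", "deployment/"+svc, "--timeout=120s"))

        url := must(exec.Command("minikube", "-p", ctx, "service", svc, "--url"))
        resp, err := http.Get(url)
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("%s -> %s\n%s", url, resp.Status, body)
    }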

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd (0.16s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-895947 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-895947 addons list -o json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd (0.16s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim (44.27s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [fe95776c-63ac-4a61-a1dd-bd4bb3201f7c] Running
functional_test_pvc_test.go:50: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003792304s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-895947 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-895947 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-895947 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-895947 apply -f testdata/storage-provisioner/pod.yaml
I1205 06:20:44.042406   16702 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [cf2d9aad-0fc8-4df0-8a50-c42ed2f2d59f] Pending
helpers_test.go:352: "sp-pod" [cf2d9aad-0fc8-4df0-8a50-c42ed2f2d59f] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [cf2d9aad-0fc8-4df0-8a50-c42ed2f2d59f] Running
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 15.007336441s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-895947 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-895947 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-895947 apply -f testdata/storage-provisioner/pod.yaml
I1205 06:20:59.823748   16702 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [2a574b16-2b2f-4d38-846f-87d0da493ebc] Pending
helpers_test.go:352: "sp-pod" [2a574b16-2b2f-4d38-846f-87d0da493ebc] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [2a574b16-2b2f-4d38-846f-87d0da493ebc] Running
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 22.013441768s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-895947 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim (44.27s)
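Editor's note: the persistence check above is create the PVC, run a pod against it, write a file, delete and recreate the pod, then confirm the file survived. A sketch of the same sequence follows, reusing the testdata manifests referenced in the log (they live in the minikube repo) and `kubectl wait` in place of the harness's readiness polling.

    package main

    import (
        "fmt"
        "os/exec"
    )

    // run executes a command and echoes its output, stopping on the first failure.
    func run(args ...string) {
        out, err := exec.Command(args[0], args[1:]...).CombinedOutput()
        fmt.Printf("$ %v\n%s", args, out)
        if err != nil {
            panic(err)
        }
    }

    func main() {
        k := []string{"kubectl", "--context", "functional-895947"}

        run(append(k, "apply", "-f", "testdata/storage-provisioner/pvc.yaml")...)
        run(append(k, "apply", "-f", "testdata/storage-provisioner/pod.yaml")...)
        run(append(k, "wait", "--for=condition=ready", "pod/sp-pod", "--timeout=300s")...)

        // Write a file onto the claim, then recycle the pod.
        run(append(k, "exec", "sp-pod", "--", "touch", "/tmp/mount/foo")...)
        run(append(k, "delete", "-f", "testdata/storage-provisioner/pod.yaml")...)
        run(append(k, "apply", "-f", "testdata/storage-provisioner/pod.yaml")...)
        run(append(k, "wait", "--for=condition=ready", "pod/sp-pod", "--timeout=300s")...)

        // The file written by the first pod should still be on the PVC.
        run(append(k, "exec", "sp-pod", "--", "ls", "/tmp/mount")...)
    }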

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd (0.36s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-895947 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-895947 ssh "cat /etc/hostname"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd (0.36s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd (1.17s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-895947 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-895947 ssh -n functional-895947 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-895947 cp functional-895947:/home/docker/cp-test.txt /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelCp2127932142/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-895947 ssh -n functional-895947 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-895947 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-895947 ssh -n functional-895947 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd (1.17s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL (32.31s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-895947 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-844cf969f6-zrkfp" [b8778eaf-c3f7-48cc-a107-9115a6ef2d3e] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:352: "mysql-844cf969f6-zrkfp" [b8778eaf-c3f7-48cc-a107-9115a6ef2d3e] Running
functional_test.go:1804: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL: app=mysql healthy within 28.003358079s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-895947 exec mysql-844cf969f6-zrkfp -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-895947 exec mysql-844cf969f6-zrkfp -- mysql -ppassword -e "show databases;": exit status 1 (121.763496ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1205 06:21:20.260838   16702 retry.go:31] will retry after 1.475921084s: exit status 1
2025/12/05 06:21:20 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:1812: (dbg) Run:  kubectl --context functional-895947 exec mysql-844cf969f6-zrkfp -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-895947 exec mysql-844cf969f6-zrkfp -- mysql -ppassword -e "show databases;": exit status 1 (149.212485ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1205 06:21:21.887284   16702 retry.go:31] will retry after 1.891955283s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-895947 exec mysql-844cf969f6-zrkfp -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL (32.31s)
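Editor's note: the two ERROR 2002 retries above are expected while mysqld is still initializing after the pod reports Running, which is why the harness retries with increasing delays. A sketch of the same retry loop follows, looking the pod up by its app=mysql label instead of hard-coding the hash suffix; the attempt count and backoff are illustrative.

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    func main() {
        ctx := "functional-895947"

        // Resolve the mysql pod name from its label rather than hard-coding the hash suffix.
        out, err := exec.Command("kubectl", "--context", ctx, "get", "pods", "-l", "app=mysql",
            "-o", "jsonpath={.items[0].metadata.name}").Output()
        if err != nil {
            panic(err)
        }
        pod := strings.TrimSpace(string(out))

        // mysqld may still be starting after the pod is Running, so retry with backoff.
        for attempt, delay := 1, 2*time.Second; attempt <= 6; attempt, delay = attempt+1, delay*2 {
            res, err := exec.Command("kubectl", "--context", ctx, "exec", pod, "--",
                "mysql", "-ppassword", "-e", "show databases;").CombinedOutput()
            if err == nil {
                fmt.Printf("attempt %d succeeded:\n%s", attempt, res)
                return
            }
            fmt.Printf("attempt %d: %v, retrying in %s\n", attempt, err, delay)
            time.Sleep(delay)
        }
        fmt.Println("mysql never became reachable")
    }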

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync (0.19s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/16702/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-895947 ssh "sudo cat /etc/test/nested/copy/16702/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync (0.19s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync (1.29s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/16702.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-895947 ssh "sudo cat /etc/ssl/certs/16702.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/16702.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-895947 ssh "sudo cat /usr/share/ca-certificates/16702.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-895947 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/167022.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-895947 ssh "sudo cat /etc/ssl/certs/167022.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/167022.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-895947 ssh "sudo cat /usr/share/ca-certificates/167022.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-895947 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync (1.29s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-895947 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels (0.07s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled (0.43s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-895947 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-895947 ssh "sudo systemctl is-active docker": exit status 1 (219.853367ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-895947 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-895947 ssh "sudo systemctl is-active containerd": exit status 1 (213.489113ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled (0.43s)
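Editor's note: `systemctl is-active` exits non-zero for any state other than "active", so on this crio cluster the two runs above fail by design while printing "inactive". A small sketch that checks the printed state rather than the exit code, assuming the functional-895947 profile:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        profile := "functional-895947"
        for _, unit := range []string{"docker", "containerd"} {
            // is-active exits non-zero when the unit is not active, so ignore the error
            // and inspect what it printed instead.
            out, _ := exec.Command("minikube", "-p", profile, "ssh",
                "sudo", "systemctl", "is-active", unit).Output()
            state := strings.TrimSpace(string(out))
            if state == "inactive" {
                fmt.Printf("%s: inactive, as expected on a crio cluster\n", unit)
            } else {
                fmt.Printf("%s: unexpected state %q\n", unit, state)
            }
        }
    }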

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License (0.25s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License (0.25s)
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp (9.21s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-895947 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-895947 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-5758569b79-nl2mm" [9e1ab4c4-52f6-4703-870a-7aec241b4248] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-5758569b79-nl2mm" [9e1ab4c4-52f6-4703-870a-7aec241b4248] Running
functional_test.go:1460: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 9.004833848s
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp (9.21s)
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create (0.31s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create (0.31s)
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list (0.3s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "245.516381ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "58.874979ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list (0.30s)
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output (0.33s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "266.672697ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "65.161716ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output (0.33s)
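Note: the profile subtests above differ only in output format; the much shorter timings for the -l/--light variants suggest they skip the per-cluster status probe. A minimal sketch of the same invocations:

	out/minikube-linux-amd64 profile list
	out/minikube-linux-amd64 profile list -l
	out/minikube-linux-amd64 profile list -o json
	out/minikube-linux-amd64 profile list -o json --light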
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port (7.16s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-895947 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1952775502/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1764915640027350973" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1952775502/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1764915640027350973" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1952775502/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1764915640027350973" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1952775502/001/test-1764915640027350973
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-895947 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-895947 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (166.101232ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I1205 06:20:40.193780   16702 retry.go:31] will retry after 681.140631ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-895947 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-895947 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec  5 06:20 created-by-test
-rw-r--r-- 1 docker docker 24 Dec  5 06:20 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec  5 06:20 test-1764915640027350973
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-895947 ssh cat /mount-9p/test-1764915640027350973
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-895947 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [3fcd6ba5-88c6-4100-bdd8-efd273bd7907] Pending
helpers_test.go:352: "busybox-mount" [3fcd6ba5-88c6-4100-bdd8-efd273bd7907] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [3fcd6ba5-88c6-4100-bdd8-efd273bd7907] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [3fcd6ba5-88c6-4100-bdd8-efd273bd7907] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.003973811s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-895947 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-895947 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-895947 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-895947 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-895947 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1952775502/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port (7.16s)
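Note: a minimal sketch of the manual equivalent of this 9p mount check, assuming the same profile and a placeholder host directory /tmp/mount-demo:

	mkdir -p /tmp/mount-demo
	out/minikube-linux-amd64 mount -p functional-895947 /tmp/mount-demo:/mount-9p --alsologtostderr -v=1 &   # keep the mount daemon running in the background
	out/minikube-linux-amd64 -p functional-895947 ssh "findmnt -T /mount-9p | grep 9p"                       # exits non-zero until the 9p mount appears, hence the retry above
	out/minikube-linux-amd64 -p functional-895947 ssh -- ls -la /mount-9p                                    # host files are visible inside the guest
	out/minikube-linux-amd64 -p functional-895947 ssh "sudo umount -f /mount-9p"                             # cleanup inside the guest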
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List (0.23s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-895947 service list
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List (0.23s)
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput (0.23s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-895947 service list -o json
functional_test.go:1504: Took "232.747803ms" to run "out/minikube-linux-amd64 -p functional-895947 service list -o json"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput (0.23s)
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS (0.31s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-895947 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.39.53:32311
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS (0.31s)
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port (1.8s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-895947 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2718859353/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-895947 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-895947 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (253.79035ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I1205 06:20:47.437616   16702 retry.go:31] will retry after 723.445381ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-895947 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-895947 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-895947 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2718859353/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-895947 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-895947 ssh "sudo umount -f /mount-9p": exit status 1 (210.629914ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-895947 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-895947 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2718859353/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port (1.80s)
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format (0.42s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-895947 service hello-node --url --format={{.IP}}
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format (0.42s)
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL (0.28s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-895947 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.39.53:32311
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL (0.28s)
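Note: taken together, the ServiceCmd subtests reduce to the following flow; a minimal sketch, assuming the kicbase/echo-server image is pullable from inside the cluster:

	kubectl --context functional-895947 create deployment hello-node --image kicbase/echo-server
	kubectl --context functional-895947 expose deployment hello-node --type=NodePort --port=8080
	out/minikube-linux-amd64 -p functional-895947 service hello-node --url                          # e.g. http://192.168.39.53:32311
	out/minikube-linux-amd64 -p functional-895947 service --namespace=default --https --url hello-node

Both URL variants report the same node IP and NodePort (32311 in this run); only the scheme differs.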
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short (0.06s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-895947 version --short
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short (0.06s)
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components (0.51s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-895947 version -o=json --components
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components (0.51s)
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort (0.21s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-895947 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-895947 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.35.0-beta.0
registry.k8s.io/kube-proxy:v1.35.0-beta.0
registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
registry.k8s.io/kube-apiserver:v1.35.0-beta.0
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.13.1
localhost/minikube-local-cache-test:functional-895947
localhost/kicbase/echo-server:functional-895947
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/kicbase/echo-server:latest
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-895947 image ls --format short --alsologtostderr:
I1205 06:21:00.719372   26518 out.go:360] Setting OutFile to fd 1 ...
I1205 06:21:00.719623   26518 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1205 06:21:00.719632   26518 out.go:374] Setting ErrFile to fd 2...
I1205 06:21:00.719636   26518 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1205 06:21:00.719821   26518 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-12744/.minikube/bin
I1205 06:21:00.720337   26518 config.go:182] Loaded profile config "functional-895947": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1205 06:21:00.720427   26518 config.go:182] Loaded profile config "functional-895947": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1205 06:21:00.722257   26518 ssh_runner.go:195] Run: systemctl --version
I1205 06:21:00.724420   26518 main.go:143] libmachine: domain functional-895947 has defined MAC address 52:54:00:60:fb:f8 in network mk-functional-895947
I1205 06:21:00.724890   26518 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:60:fb:f8", ip: ""} in network mk-functional-895947: {Iface:virbr1 ExpiryTime:2025-12-05 07:17:59 +0000 UTC Type:0 Mac:52:54:00:60:fb:f8 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:functional-895947 Clientid:01:52:54:00:60:fb:f8}
I1205 06:21:00.724916   26518 main.go:143] libmachine: domain functional-895947 has defined IP address 192.168.39.53 and MAC address 52:54:00:60:fb:f8 in network mk-functional-895947
I1205 06:21:00.725091   26518 sshutil.go:53] new ssh client: &{IP:192.168.39.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-12744/.minikube/machines/functional-895947/id_rsa Username:docker}
I1205 06:21:00.812198   26518 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort (0.21s)
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable (0.45s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-895947 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-895947 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬───────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG        │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼───────────────────┼───────────────┼────────┤
│ registry.k8s.io/pause                   │ latest            │ 350b164e7ae1d │ 247kB  │
│ gcr.io/k8s-minikube/busybox             │ latest            │ beae173ccac6a │ 1.46MB │
│ localhost/my-image                      │ functional-895947 │ 5113ef9c44786 │ 1.47MB │
│ registry.k8s.io/etcd                    │ 3.6.5-0           │ a3e246e9556e9 │ 63.6MB │
│ registry.k8s.io/pause                   │ 3.3               │ 0184c1613d929 │ 686kB  │
│ docker.io/kicbase/echo-server           │ latest            │ 9056ab77afb8e │ 4.94MB │
│ localhost/kicbase/echo-server           │ functional-895947 │ 9056ab77afb8e │ 4.94MB │
│ docker.io/library/nginx                 │ latest            │ 60adc2e137e75 │ 155MB  │
│ registry.k8s.io/kube-controller-manager │ v1.35.0-beta.0    │ 45f3cc72d235f │ 76.9MB │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc      │ 56cc512116c8f │ 4.63MB │
│ localhost/minikube-local-cache-test     │ functional-895947 │ e9bc44d5c4fe1 │ 3.33kB │
│ registry.k8s.io/coredns/coredns         │ v1.13.1           │ aa5e3ebc0dfed │ 79.2MB │
│ registry.k8s.io/kube-proxy              │ v1.35.0-beta.0    │ 8a4ded35a3eb1 │ 72MB   │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                │ 6e38f40d628db │ 31.5MB │
│ registry.k8s.io/kube-apiserver          │ v1.35.0-beta.0    │ aa9d02839d8de │ 90.8MB │
│ registry.k8s.io/kube-scheduler          │ v1.35.0-beta.0    │ 7bb6219ddab95 │ 52.7MB │
│ registry.k8s.io/pause                   │ 3.1               │ da86e6ba6ca19 │ 747kB  │
│ registry.k8s.io/pause                   │ 3.10.1            │ cd073f4c5f6a8 │ 740kB  │
└─────────────────────────────────────────┴───────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-895947 image ls --format table --alsologtostderr:
I1205 06:21:05.973670   26597 out.go:360] Setting OutFile to fd 1 ...
I1205 06:21:05.973914   26597 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1205 06:21:05.973922   26597 out.go:374] Setting ErrFile to fd 2...
I1205 06:21:05.973926   26597 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1205 06:21:05.974116   26597 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-12744/.minikube/bin
I1205 06:21:05.974632   26597 config.go:182] Loaded profile config "functional-895947": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1205 06:21:05.974734   26597 config.go:182] Loaded profile config "functional-895947": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1205 06:21:05.976843   26597 ssh_runner.go:195] Run: systemctl --version
I1205 06:21:05.978911   26597 main.go:143] libmachine: domain functional-895947 has defined MAC address 52:54:00:60:fb:f8 in network mk-functional-895947
I1205 06:21:05.979310   26597 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:60:fb:f8", ip: ""} in network mk-functional-895947: {Iface:virbr1 ExpiryTime:2025-12-05 07:17:59 +0000 UTC Type:0 Mac:52:54:00:60:fb:f8 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:functional-895947 Clientid:01:52:54:00:60:fb:f8}
I1205 06:21:05.979335   26597 main.go:143] libmachine: domain functional-895947 has defined IP address 192.168.39.53 and MAC address 52:54:00:60:fb:f8 in network mk-functional-895947
I1205 06:21:05.979472   26597 sshutil.go:53] new ssh client: &{IP:192.168.39.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-12744/.minikube/machines/functional-895947/id_rsa Username:docker}
I1205 06:21:06.079287   26597 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable (0.45s)
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson (0.48s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-895947 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-895947 image ls --format json --alsologtostderr:
[{"id":"a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1","repoDigests":["registry.k8s.io/etcd@sha256:09c404d47c88be54eaaf0af6edaecdc1a417bcf04522ffeaf62c4dc0ed5a6d10"],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"63582165"},{"id":"aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b","repoDigests":["registry.k8s.io/kube-apiserver@sha256:dd50de52ebf30a673c65da77c8b4af5cbc6be3c475a2d8165796a7a7bdd0b9d5"],"repoTags":["registry.k8s.io/kube-apiserver:v1.35.0-beta.0"],"size":"90816810"},{"id":"7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46","repoDigests":["registry.k8s.io/kube-scheduler@sha256:f852fad6b028092c481b57e7fcd16936a8aec43c2e4dccf5a0600946a449c2a3"],"repoTags":["registry.k8s.io/kube-scheduler:v1.35.0-beta.0"],"size":"52744336"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:
3.3"],"size":"686139"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf","localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["docker.io/kicbase/echo-server:latest","localhost/kicbase/echo-server:functional-895947"],"size":"4944818"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f9
52adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},{"id":"45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:5e3bd70d468022881b995e23abf02a2d39ee87ebacd7018f6c478d9e01870b8b"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"],"size":"76869776"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9
ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:a8ad62a46c568df922febd0986d02f88bfe5e1a8f5e8dd0bd02a0cafffba019b"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"739536"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"60adc2e137e757418d4d771822fa3b3f5d3b4ad58ef2385d200c9ee78375b6d5","repoDigests":["docker.io/library/nginx@sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42","docker.io/library/nginx@sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541"],"repoTags":["docker.io/library/nginx:latest"],"size":"155491845"},{"id":"aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139","repoDigests":["registry.k8s.io/cored
ns/coredns@sha256:dfca5e5f4caae19c3ac20d841ab02fe19647ef0dd97c41424007cceb417af7db"],"repoTags":["registry.k8s.io/coredns/coredns:v1.13.1"],"size":"79190589"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31468661"},{"id":"8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810","repoDigests":["registry.k8s.io/kube-proxy@sha256:0ed737a63ad50cf0d7049b0bd88755be8d5bc9fb5e39efdece79639b998532f6"],"repoTags":["registry.k8s.io/kube-proxy:v1.35.0-beta.0"],"
size":"71976228"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"4de3d3915347ec7c62e35eb8bd3ab1303a104008edd69a522a6946bbac7f3950","repoDigests":["docker.io/library/d74474d0e3bee74479a4cdae970a7bc10b2ad9d46825c002276bb6f1221ca892-tmp@sha256:bde6dd751dfd49b3ce81b126bcd2b59429dc871416f6a57e82cdf853c415c083"],"repoTags":[],"size":"1466018"},{"id":"e9bc44d5c4fe114cfe36ff8199ebbeaff448c4842982432ba8648d8896001650","repoDigests":["localhost/minikube-local-cache-test@sha256:e701a3cb6da7d451d9ea09191a7059e17ae649a7e0f0bcd96c6223aa680a8665"],"repoTags":["localhost/minikube-local-cache-test:functional-895947"],"size":"3330"},{"id":"5113ef9c44786f75d17ae85df860c6fffd032ee97ebff3e7cc8a9749f8d2d96b","repoDigests":["loca
lhost/my-image@sha256:a2c3600b929c9309c665d6e339ea77b4fc1289e03ea5a0bb43f2645b958a4cf5"],"repoTags":["localhost/my-image:functional-895947"],"size":"1468600"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-895947 image ls --format json --alsologtostderr:
I1205 06:21:05.487578   26587 out.go:360] Setting OutFile to fd 1 ...
I1205 06:21:05.487842   26587 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1205 06:21:05.487852   26587 out.go:374] Setting ErrFile to fd 2...
I1205 06:21:05.487856   26587 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1205 06:21:05.488108   26587 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-12744/.minikube/bin
I1205 06:21:05.488669   26587 config.go:182] Loaded profile config "functional-895947": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1205 06:21:05.488790   26587 config.go:182] Loaded profile config "functional-895947": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1205 06:21:05.490955   26587 ssh_runner.go:195] Run: systemctl --version
I1205 06:21:05.493328   26587 main.go:143] libmachine: domain functional-895947 has defined MAC address 52:54:00:60:fb:f8 in network mk-functional-895947
I1205 06:21:05.493823   26587 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:60:fb:f8", ip: ""} in network mk-functional-895947: {Iface:virbr1 ExpiryTime:2025-12-05 07:17:59 +0000 UTC Type:0 Mac:52:54:00:60:fb:f8 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:functional-895947 Clientid:01:52:54:00:60:fb:f8}
I1205 06:21:05.493847   26587 main.go:143] libmachine: domain functional-895947 has defined IP address 192.168.39.53 and MAC address 52:54:00:60:fb:f8 in network mk-functional-895947
I1205 06:21:05.494000   26587 sshutil.go:53] new ssh client: &{IP:192.168.39.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-12744/.minikube/machines/functional-895947/id_rsa Username:docker}
I1205 06:21:05.609004   26587 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson (0.48s)
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml (0.25s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-895947 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-895947 image ls --format yaml --alsologtostderr:
- id: e9bc44d5c4fe114cfe36ff8199ebbeaff448c4842982432ba8648d8896001650
repoDigests:
- localhost/minikube-local-cache-test@sha256:e701a3cb6da7d451d9ea09191a7059e17ae649a7e0f0bcd96c6223aa680a8665
repoTags:
- localhost/minikube-local-cache-test:functional-895947
size: "3330"
- id: aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:dfca5e5f4caae19c3ac20d841ab02fe19647ef0dd97c41424007cceb417af7db
repoTags:
- registry.k8s.io/coredns/coredns:v1.13.1
size: "79190589"
- id: a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1
repoDigests:
- registry.k8s.io/etcd@sha256:09c404d47c88be54eaaf0af6edaecdc1a417bcf04522ffeaf62c4dc0ed5a6d10
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "63582165"
- id: 45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:5e3bd70d468022881b995e23abf02a2d39ee87ebacd7018f6c478d9e01870b8b
repoTags:
- registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
size: "76869776"
- id: 7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:f852fad6b028092c481b57e7fcd16936a8aec43c2e4dccf5a0600946a449c2a3
repoTags:
- registry.k8s.io/kube-scheduler:v1.35.0-beta.0
size: "52744336"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:a8ad62a46c568df922febd0986d02f88bfe5e1a8f5e8dd0bd02a0cafffba019b
repoTags:
- registry.k8s.io/pause:3.10.1
size: "739536"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31468661"
- id: 8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810
repoDigests:
- registry.k8s.io/kube-proxy@sha256:0ed737a63ad50cf0d7049b0bd88755be8d5bc9fb5e39efdece79639b998532f6
repoTags:
- registry.k8s.io/kube-proxy:v1.35.0-beta.0
size: "71976228"
- id: aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:dd50de52ebf30a673c65da77c8b4af5cbc6be3c475a2d8165796a7a7bdd0b9d5
repoTags:
- registry.k8s.io/kube-apiserver:v1.35.0-beta.0
size: "90816810"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
- localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- docker.io/kicbase/echo-server:latest
- localhost/kicbase/echo-server:functional-895947
size: "4944818"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 60adc2e137e757418d4d771822fa3b3f5d3b4ad58ef2385d200c9ee78375b6d5
repoDigests:
- docker.io/library/nginx@sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42
- docker.io/library/nginx@sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541
repoTags:
- docker.io/library/nginx:latest
size: "155491845"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-895947 image ls --format yaml --alsologtostderr:
I1205 06:21:00.933037   26529 out.go:360] Setting OutFile to fd 1 ...
I1205 06:21:00.933336   26529 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1205 06:21:00.933348   26529 out.go:374] Setting ErrFile to fd 2...
I1205 06:21:00.933356   26529 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1205 06:21:00.933641   26529 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-12744/.minikube/bin
I1205 06:21:00.934421   26529 config.go:182] Loaded profile config "functional-895947": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1205 06:21:00.934560   26529 config.go:182] Loaded profile config "functional-895947": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1205 06:21:00.936945   26529 ssh_runner.go:195] Run: systemctl --version
I1205 06:21:00.939572   26529 main.go:143] libmachine: domain functional-895947 has defined MAC address 52:54:00:60:fb:f8 in network mk-functional-895947
I1205 06:21:00.940005   26529 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:60:fb:f8", ip: ""} in network mk-functional-895947: {Iface:virbr1 ExpiryTime:2025-12-05 07:17:59 +0000 UTC Type:0 Mac:52:54:00:60:fb:f8 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:functional-895947 Clientid:01:52:54:00:60:fb:f8}
I1205 06:21:00.940046   26529 main.go:143] libmachine: domain functional-895947 has defined IP address 192.168.39.53 and MAC address 52:54:00:60:fb:f8 in network mk-functional-895947
I1205 06:21:00.940225   26529 sshutil.go:53] new ssh client: &{IP:192.168.39.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-12744/.minikube/machines/functional-895947/id_rsa Username:docker}
I1205 06:21:01.037950   26529 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml (0.25s)
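Note: the four ImageList subtests list the same image set in different formats, each backed by the "sudo crictl images --output json" call visible in the stderr traces. A minimal sketch of the variants:

	out/minikube-linux-amd64 -p functional-895947 image ls --format short   # one repo:tag per line
	out/minikube-linux-amd64 -p functional-895947 image ls --format table   # IMAGE / TAG / IMAGE ID / SIZE columns
	out/minikube-linux-amd64 -p functional-895947 image ls --format json
	out/minikube-linux-amd64 -p functional-895947 image ls --format yaml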
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild (4.31s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-895947 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-895947 ssh pgrep buildkitd: exit status 1 (170.858868ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-895947 image build -t localhost/my-image:functional-895947 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-895947 image build -t localhost/my-image:functional-895947 testdata/build --alsologtostderr: (3.627608527s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-895947 image build -t localhost/my-image:functional-895947 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 4de3d391534
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-895947
--> 5113ef9c447
Successfully tagged localhost/my-image:functional-895947
5113ef9c44786f75d17ae85df860c6fffd032ee97ebff3e7cc8a9749f8d2d96b
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-895947 image build -t localhost/my-image:functional-895947 testdata/build --alsologtostderr:
I1205 06:21:01.347960   26550 out.go:360] Setting OutFile to fd 1 ...
I1205 06:21:01.348203   26550 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1205 06:21:01.348215   26550 out.go:374] Setting ErrFile to fd 2...
I1205 06:21:01.348219   26550 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1205 06:21:01.348394   26550 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-12744/.minikube/bin
I1205 06:21:01.348923   26550 config.go:182] Loaded profile config "functional-895947": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1205 06:21:01.349569   26550 config.go:182] Loaded profile config "functional-895947": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1205 06:21:01.351722   26550 ssh_runner.go:195] Run: systemctl --version
I1205 06:21:01.354183   26550 main.go:143] libmachine: domain functional-895947 has defined MAC address 52:54:00:60:fb:f8 in network mk-functional-895947
I1205 06:21:01.354573   26550 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:60:fb:f8", ip: ""} in network mk-functional-895947: {Iface:virbr1 ExpiryTime:2025-12-05 07:17:59 +0000 UTC Type:0 Mac:52:54:00:60:fb:f8 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:functional-895947 Clientid:01:52:54:00:60:fb:f8}
I1205 06:21:01.354607   26550 main.go:143] libmachine: domain functional-895947 has defined IP address 192.168.39.53 and MAC address 52:54:00:60:fb:f8 in network mk-functional-895947
I1205 06:21:01.354749   26550 sshutil.go:53] new ssh client: &{IP:192.168.39.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-12744/.minikube/machines/functional-895947/id_rsa Username:docker}
I1205 06:21:01.459744   26550 build_images.go:162] Building image from path: /tmp/build.1625350083.tar
I1205 06:21:01.459811   26550 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1205 06:21:01.473570   26550 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1625350083.tar
I1205 06:21:01.480063   26550 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1625350083.tar: stat -c "%s %y" /var/lib/minikube/build/build.1625350083.tar: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1625350083.tar': No such file or directory
I1205 06:21:01.480091   26550 ssh_runner.go:362] scp /tmp/build.1625350083.tar --> /var/lib/minikube/build/build.1625350083.tar (3072 bytes)
I1205 06:21:01.527377   26550 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1625350083
I1205 06:21:01.541751   26550 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1625350083 -xf /var/lib/minikube/build/build.1625350083.tar
I1205 06:21:01.558721   26550 crio.go:315] Building image: /var/lib/minikube/build/build.1625350083
I1205 06:21:01.558816   26550 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-895947 /var/lib/minikube/build/build.1625350083 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1205 06:21:04.864056   26550 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-895947 /var/lib/minikube/build/build.1625350083 --cgroup-manager=cgroupfs: (3.305208272s)
I1205 06:21:04.864151   26550 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1625350083
I1205 06:21:04.885337   26550 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1625350083.tar
I1205 06:21:04.909579   26550 build_images.go:218] Built localhost/my-image:functional-895947 from /tmp/build.1625350083.tar
I1205 06:21:04.909618   26550 build_images.go:134] succeeded building to: functional-895947
I1205 06:21:04.909623   26550 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-895947 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild (4.31s)
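Note: as the stderr trace shows, on a crio cluster the build context is copied into the guest as a tarball and built with "sudo podman build ... --cgroup-manager=cgroupfs". A minimal sketch of the same flow, assuming a build context directory such as testdata/build:

	out/minikube-linux-amd64 -p functional-895947 ssh pgrep buildkitd        # exit status 1 here just means no buildkit daemon is running
	out/minikube-linux-amd64 -p functional-895947 image build -t localhost/my-image:functional-895947 testdata/build --alsologtostderr
	out/minikube-linux-amd64 -p functional-895947 image ls                   # the new localhost/my-image tag should be listed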
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup (0.69s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-895947
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup (0.69s)
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon (3.98s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-895947 image load --daemon kicbase/echo-server:functional-895947 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-amd64 -p functional-895947 image load --daemon kicbase/echo-server:functional-895947 --alsologtostderr: (3.687515808s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-895947 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon (3.98s)
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup (1.24s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-895947 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2925633403/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-895947 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2925633403/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-895947 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2925633403/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-895947 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-895947 ssh "findmnt -T" /mount1: exit status 1 (237.685208ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I1205 06:20:49.226782   16702 retry.go:31] will retry after 378.277427ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-895947 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-895947 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-895947 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-895947 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-895947 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2925633403/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-895947 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2925633403/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-895947 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2925633403/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup (1.24s)
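Note: this subtest checks that a single kill switch tears down all three mount daemons at once; a minimal sketch of the cleanup path (the follow-up findmnt is an extra check, not part of the test):

	out/minikube-linux-amd64 mount -p functional-895947 --kill=true           # terminates every mount process for the profile
	out/minikube-linux-amd64 -p functional-895947 ssh "findmnt -T" /mount1    # should now exit non-zero, confirming the mount is gone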
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes (0.07s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-895947 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes (0.07s)
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster (0.07s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-895947 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster (0.07s)
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters (0.07s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-895947 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters (0.07s)
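Note: all three UpdateContextCmd subtests run the same command; roughly, update-context rewrites the profile's kubeconfig entry to point at the cluster's current API server address, and the subtests only vary how much cluster state exists. A minimal invocation:
    out/minikube-linux-amd64 -p functional-895947 update-context --alsologtostderr -v=2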

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon (0.85s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-895947 image load --daemon kicbase/echo-server:functional-895947 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-895947 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon (0.85s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon (1.64s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-895947
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-895947 image load --daemon kicbase/echo-server:functional-895947 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-895947 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon (1.64s)
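Note: the daemon-load path shown above is, roughly, tag an image locally and then push it into the cluster's container runtime; a sketch using only commands from the log:
    docker pull kicbase/echo-server:latest
    docker tag kicbase/echo-server:latest kicbase/echo-server:functional-895947
    out/minikube-linux-amd64 -p functional-895947 image load --daemon kicbase/echo-server:functional-895947
    # confirm the image is now visible to the cluster runtime
    out/minikube-linux-amd64 -p functional-895947 image ls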

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile (0.59s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-895947 image save kicbase/echo-server:functional-895947 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile (0.59s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove (0.58s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-895947 image rm kicbase/echo-server:functional-895947 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-895947 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove (0.58s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile (2.77s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-895947 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:424: (dbg) Done: out/minikube-linux-amd64 -p functional-895947 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr: (2.541361435s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-895947 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile (2.77s)
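Note: together with ImageSaveToFile above, this is a tarball round trip; the sketch below substitutes an illustrative local path for the Jenkins workspace path in the log:
    # export the image from the cluster to a tarball, then import it again
    out/minikube-linux-amd64 -p functional-895947 image save kicbase/echo-server:functional-895947 ./echo-server-save.tar
    out/minikube-linux-amd64 -p functional-895947 image load ./echo-server-save.tar
    out/minikube-linux-amd64 -p functional-895947 image ls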

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon (0.6s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-895947
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-895947 image save --daemon kicbase/echo-server:functional-895947 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-895947
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon (0.60s)
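Note: the daemon-save direction copies the image from the cluster back into the host Docker daemon; as the log shows, the restored tag is then inspected under a localhost/ prefix:
    out/minikube-linux-amd64 -p functional-895947 image save --daemon kicbase/echo-server:functional-895947
    docker image inspect localhost/kicbase/echo-server:functional-895947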

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images (0.03s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-895947
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images (0.03s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-895947
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image (0.02s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-895947
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images (0.02s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StartCluster (198.58s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-162483 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio
E1205 06:21:47.449522   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/functional-158571/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:21:47.455947   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/functional-158571/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:21:47.467425   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/functional-158571/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:21:47.488739   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/functional-158571/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:21:47.530182   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/functional-158571/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:21:47.611652   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/functional-158571/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:21:47.773560   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/functional-158571/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:21:48.095606   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/functional-158571/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:21:48.737737   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/functional-158571/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:21:50.020044   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/functional-158571/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:21:52.581971   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/functional-158571/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:21:57.703848   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/functional-158571/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:22:07.945572   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/functional-158571/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:22:28.427175   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/functional-158571/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:23:09.389225   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/functional-158571/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:23:27.901214   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/addons-704432/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:24:31.310950   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/functional-158571/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-162483 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio: (3m18.030838874s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-162483 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (198.58s)
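Note: the HA start exercised here maps to a single start invocation plus a status check; a sketch with the flags from the log:
    # create a multi-control-plane (HA) cluster and wait for all components to be ready
    out/minikube-linux-amd64 -p ha-162483 start --ha --memory 3072 --wait true --driver=kvm2 --container-runtime=crio
    out/minikube-linux-amd64 -p ha-162483 status --alsologtostderr -v 5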

                                                
                                    
x
+
TestMultiControlPlane/serial/DeployApp (6.7s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-162483 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-162483 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-162483 kubectl -- rollout status deployment/busybox: (4.336687615s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-162483 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-162483 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-162483 kubectl -- exec busybox-7b57f96db7-9ssb5 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-162483 kubectl -- exec busybox-7b57f96db7-ddlh7 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-162483 kubectl -- exec busybox-7b57f96db7-zf48c -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-162483 kubectl -- exec busybox-7b57f96db7-9ssb5 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-162483 kubectl -- exec busybox-7b57f96db7-ddlh7 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-162483 kubectl -- exec busybox-7b57f96db7-zf48c -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-162483 kubectl -- exec busybox-7b57f96db7-9ssb5 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-162483 kubectl -- exec busybox-7b57f96db7-ddlh7 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-162483 kubectl -- exec busybox-7b57f96db7-zf48c -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.70s)
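Note: the DNS checks above can be repeated against any of the busybox replicas; <busybox-pod> below is a placeholder for a name returned by the first command:
    kubectl --context ha-162483 get pods -o jsonpath='{.items[*].metadata.name}'
    # external, cluster-internal, and fully qualified lookups, as in the test
    kubectl --context ha-162483 exec <busybox-pod> -- nslookup kubernetes.io
    kubectl --context ha-162483 exec <busybox-pod> -- nslookup kubernetes.default.svc.cluster.local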

                                                
                                    
x
+
TestMultiControlPlane/serial/PingHostFromPods (1.29s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-162483 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-162483 kubectl -- exec busybox-7b57f96db7-9ssb5 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-162483 kubectl -- exec busybox-7b57f96db7-9ssb5 -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-162483 kubectl -- exec busybox-7b57f96db7-ddlh7 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-162483 kubectl -- exec busybox-7b57f96db7-ddlh7 -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-162483 kubectl -- exec busybox-7b57f96db7-zf48c -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-162483 kubectl -- exec busybox-7b57f96db7-zf48c -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.29s)
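Note: host reachability is checked by resolving host.minikube.internal from inside a pod and pinging the result; <busybox-pod> is again a placeholder, and 192.168.39.1 is the KVM host address seen in this run:
    kubectl --context ha-162483 exec <busybox-pod> -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
    kubectl --context ha-162483 exec <busybox-pod> -- sh -c "ping -c 1 192.168.39.1"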

                                                
                                    
x
+
TestMultiControlPlane/serial/AddWorkerNode (46.18s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-162483 node add --alsologtostderr -v 5
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-162483 node add --alsologtostderr -v 5: (45.508394206s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-162483 status --alsologtostderr -v 5
E1205 06:25:37.468706   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/functional-895947/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:25:37.475108   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/functional-895947/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:25:37.486531   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/functional-895947/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:25:37.508546   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/functional-895947/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:25:37.550855   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/functional-895947/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:25:37.632716   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/functional-895947/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (46.18s)

                                                
                                    
x
+
TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-162483 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
E1205 06:25:37.794830   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/functional-895947/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.65s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
E1205 06:25:38.116163   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/functional-895947/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.65s)

                                                
                                    
x
+
TestMultiControlPlane/serial/CopyFile (10.42s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-162483 status --output json --alsologtostderr -v 5
E1205 06:25:38.758397   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/functional-895947/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-162483 cp testdata/cp-test.txt ha-162483:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-162483 ssh -n ha-162483 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-162483 cp ha-162483:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2386209350/001/cp-test_ha-162483.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-162483 ssh -n ha-162483 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-162483 cp ha-162483:/home/docker/cp-test.txt ha-162483-m02:/home/docker/cp-test_ha-162483_ha-162483-m02.txt
E1205 06:25:40.040488   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/functional-895947/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-162483 ssh -n ha-162483 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-162483 ssh -n ha-162483-m02 "sudo cat /home/docker/cp-test_ha-162483_ha-162483-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-162483 cp ha-162483:/home/docker/cp-test.txt ha-162483-m03:/home/docker/cp-test_ha-162483_ha-162483-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-162483 ssh -n ha-162483 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-162483 ssh -n ha-162483-m03 "sudo cat /home/docker/cp-test_ha-162483_ha-162483-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-162483 cp ha-162483:/home/docker/cp-test.txt ha-162483-m04:/home/docker/cp-test_ha-162483_ha-162483-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-162483 ssh -n ha-162483 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-162483 ssh -n ha-162483-m04 "sudo cat /home/docker/cp-test_ha-162483_ha-162483-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-162483 cp testdata/cp-test.txt ha-162483-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-162483 ssh -n ha-162483-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-162483 cp ha-162483-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2386209350/001/cp-test_ha-162483-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-162483 ssh -n ha-162483-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-162483 cp ha-162483-m02:/home/docker/cp-test.txt ha-162483:/home/docker/cp-test_ha-162483-m02_ha-162483.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-162483 ssh -n ha-162483-m02 "sudo cat /home/docker/cp-test.txt"
E1205 06:25:42.602802   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/functional-895947/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-162483 ssh -n ha-162483 "sudo cat /home/docker/cp-test_ha-162483-m02_ha-162483.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-162483 cp ha-162483-m02:/home/docker/cp-test.txt ha-162483-m03:/home/docker/cp-test_ha-162483-m02_ha-162483-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-162483 ssh -n ha-162483-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-162483 ssh -n ha-162483-m03 "sudo cat /home/docker/cp-test_ha-162483-m02_ha-162483-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-162483 cp ha-162483-m02:/home/docker/cp-test.txt ha-162483-m04:/home/docker/cp-test_ha-162483-m02_ha-162483-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-162483 ssh -n ha-162483-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-162483 ssh -n ha-162483-m04 "sudo cat /home/docker/cp-test_ha-162483-m02_ha-162483-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-162483 cp testdata/cp-test.txt ha-162483-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-162483 ssh -n ha-162483-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-162483 cp ha-162483-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2386209350/001/cp-test_ha-162483-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-162483 ssh -n ha-162483-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-162483 cp ha-162483-m03:/home/docker/cp-test.txt ha-162483:/home/docker/cp-test_ha-162483-m03_ha-162483.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-162483 ssh -n ha-162483-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-162483 ssh -n ha-162483 "sudo cat /home/docker/cp-test_ha-162483-m03_ha-162483.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-162483 cp ha-162483-m03:/home/docker/cp-test.txt ha-162483-m02:/home/docker/cp-test_ha-162483-m03_ha-162483-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-162483 ssh -n ha-162483-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-162483 ssh -n ha-162483-m02 "sudo cat /home/docker/cp-test_ha-162483-m03_ha-162483-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-162483 cp ha-162483-m03:/home/docker/cp-test.txt ha-162483-m04:/home/docker/cp-test_ha-162483-m03_ha-162483-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-162483 ssh -n ha-162483-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-162483 ssh -n ha-162483-m04 "sudo cat /home/docker/cp-test_ha-162483-m03_ha-162483-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-162483 cp testdata/cp-test.txt ha-162483-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-162483 ssh -n ha-162483-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-162483 cp ha-162483-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2386209350/001/cp-test_ha-162483-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-162483 ssh -n ha-162483-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-162483 cp ha-162483-m04:/home/docker/cp-test.txt ha-162483:/home/docker/cp-test_ha-162483-m04_ha-162483.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-162483 ssh -n ha-162483-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-162483 ssh -n ha-162483 "sudo cat /home/docker/cp-test_ha-162483-m04_ha-162483.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-162483 cp ha-162483-m04:/home/docker/cp-test.txt ha-162483-m02:/home/docker/cp-test_ha-162483-m04_ha-162483-m02.txt
E1205 06:25:47.724234   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/functional-895947/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-162483 ssh -n ha-162483-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-162483 ssh -n ha-162483-m02 "sudo cat /home/docker/cp-test_ha-162483-m04_ha-162483-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-162483 cp ha-162483-m04:/home/docker/cp-test.txt ha-162483-m03:/home/docker/cp-test_ha-162483-m04_ha-162483-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-162483 ssh -n ha-162483-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-162483 ssh -n ha-162483-m03 "sudo cat /home/docker/cp-test_ha-162483-m04_ha-162483-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (10.42s)
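Note: each CopyFile step is a minikube cp followed by an ssh cat to verify the contents landed; a representative round trip (the /tmp destination is illustrative):
    out/minikube-linux-amd64 -p ha-162483 cp testdata/cp-test.txt ha-162483-m02:/home/docker/cp-test.txt
    out/minikube-linux-amd64 -p ha-162483 ssh -n ha-162483-m02 "sudo cat /home/docker/cp-test.txt"
    out/minikube-linux-amd64 -p ha-162483 cp ha-162483-m02:/home/docker/cp-test.txt /tmp/cp-test_ha-162483-m02.txt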

                                                
                                    
x
+
TestMultiControlPlane/serial/StopSecondaryNode (83.61s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-162483 node stop m02 --alsologtostderr -v 5
E1205 06:25:57.966543   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/functional-895947/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:26:18.448408   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/functional-895947/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:26:47.449715   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/functional-158571/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:26:59.410298   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/functional-895947/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-162483 node stop m02 --alsologtostderr -v 5: (1m23.108429601s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-162483 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-162483 status --alsologtostderr -v 5: exit status 7 (505.793699ms)

                                                
                                                
-- stdout --
	ha-162483
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-162483-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-162483-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-162483-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1205 06:27:12.068127   29654 out.go:360] Setting OutFile to fd 1 ...
	I1205 06:27:12.068431   29654 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:27:12.068448   29654 out.go:374] Setting ErrFile to fd 2...
	I1205 06:27:12.068454   29654 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:27:12.068783   29654 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-12744/.minikube/bin
	I1205 06:27:12.069015   29654 out.go:368] Setting JSON to false
	I1205 06:27:12.069040   29654 mustload.go:66] Loading cluster: ha-162483
	I1205 06:27:12.069101   29654 notify.go:221] Checking for updates...
	I1205 06:27:12.069471   29654 config.go:182] Loaded profile config "ha-162483": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1205 06:27:12.069489   29654 status.go:174] checking status of ha-162483 ...
	I1205 06:27:12.072042   29654 status.go:371] ha-162483 host status = "Running" (err=<nil>)
	I1205 06:27:12.072070   29654 host.go:66] Checking if "ha-162483" exists ...
	I1205 06:27:12.074459   29654 main.go:143] libmachine: domain ha-162483 has defined MAC address 52:54:00:28:48:8f in network mk-ha-162483
	I1205 06:27:12.074920   29654 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:28:48:8f", ip: ""} in network mk-ha-162483: {Iface:virbr1 ExpiryTime:2025-12-05 07:21:39 +0000 UTC Type:0 Mac:52:54:00:28:48:8f Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:ha-162483 Clientid:01:52:54:00:28:48:8f}
	I1205 06:27:12.074944   29654 main.go:143] libmachine: domain ha-162483 has defined IP address 192.168.39.171 and MAC address 52:54:00:28:48:8f in network mk-ha-162483
	I1205 06:27:12.075074   29654 host.go:66] Checking if "ha-162483" exists ...
	I1205 06:27:12.075243   29654 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1205 06:27:12.077738   29654 main.go:143] libmachine: domain ha-162483 has defined MAC address 52:54:00:28:48:8f in network mk-ha-162483
	I1205 06:27:12.078155   29654 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:28:48:8f", ip: ""} in network mk-ha-162483: {Iface:virbr1 ExpiryTime:2025-12-05 07:21:39 +0000 UTC Type:0 Mac:52:54:00:28:48:8f Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:ha-162483 Clientid:01:52:54:00:28:48:8f}
	I1205 06:27:12.078179   29654 main.go:143] libmachine: domain ha-162483 has defined IP address 192.168.39.171 and MAC address 52:54:00:28:48:8f in network mk-ha-162483
	I1205 06:27:12.078356   29654 sshutil.go:53] new ssh client: &{IP:192.168.39.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-12744/.minikube/machines/ha-162483/id_rsa Username:docker}
	I1205 06:27:12.166424   29654 ssh_runner.go:195] Run: systemctl --version
	I1205 06:27:12.173155   29654 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 06:27:12.195504   29654 kubeconfig.go:125] found "ha-162483" server: "https://192.168.39.254:8443"
	I1205 06:27:12.195548   29654 api_server.go:166] Checking apiserver status ...
	I1205 06:27:12.195601   29654 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:27:12.222818   29654 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1414/cgroup
	W1205 06:27:12.234934   29654 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1414/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1205 06:27:12.235002   29654 ssh_runner.go:195] Run: ls
	I1205 06:27:12.241250   29654 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1205 06:27:12.247209   29654 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1205 06:27:12.247236   29654 status.go:463] ha-162483 apiserver status = Running (err=<nil>)
	I1205 06:27:12.247246   29654 status.go:176] ha-162483 status: &{Name:ha-162483 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1205 06:27:12.247260   29654 status.go:174] checking status of ha-162483-m02 ...
	I1205 06:27:12.248874   29654 status.go:371] ha-162483-m02 host status = "Stopped" (err=<nil>)
	I1205 06:27:12.248889   29654 status.go:384] host is not running, skipping remaining checks
	I1205 06:27:12.248894   29654 status.go:176] ha-162483-m02 status: &{Name:ha-162483-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1205 06:27:12.248906   29654 status.go:174] checking status of ha-162483-m03 ...
	I1205 06:27:12.250206   29654 status.go:371] ha-162483-m03 host status = "Running" (err=<nil>)
	I1205 06:27:12.250221   29654 host.go:66] Checking if "ha-162483-m03" exists ...
	I1205 06:27:12.252941   29654 main.go:143] libmachine: domain ha-162483-m03 has defined MAC address 52:54:00:fc:5b:fa in network mk-ha-162483
	I1205 06:27:12.253358   29654 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:fc:5b:fa", ip: ""} in network mk-ha-162483: {Iface:virbr1 ExpiryTime:2025-12-05 07:23:39 +0000 UTC Type:0 Mac:52:54:00:fc:5b:fa Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-162483-m03 Clientid:01:52:54:00:fc:5b:fa}
	I1205 06:27:12.253384   29654 main.go:143] libmachine: domain ha-162483-m03 has defined IP address 192.168.39.15 and MAC address 52:54:00:fc:5b:fa in network mk-ha-162483
	I1205 06:27:12.253507   29654 host.go:66] Checking if "ha-162483-m03" exists ...
	I1205 06:27:12.253720   29654 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1205 06:27:12.255754   29654 main.go:143] libmachine: domain ha-162483-m03 has defined MAC address 52:54:00:fc:5b:fa in network mk-ha-162483
	I1205 06:27:12.256193   29654 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:fc:5b:fa", ip: ""} in network mk-ha-162483: {Iface:virbr1 ExpiryTime:2025-12-05 07:23:39 +0000 UTC Type:0 Mac:52:54:00:fc:5b:fa Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-162483-m03 Clientid:01:52:54:00:fc:5b:fa}
	I1205 06:27:12.256216   29654 main.go:143] libmachine: domain ha-162483-m03 has defined IP address 192.168.39.15 and MAC address 52:54:00:fc:5b:fa in network mk-ha-162483
	I1205 06:27:12.256361   29654 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-12744/.minikube/machines/ha-162483-m03/id_rsa Username:docker}
	I1205 06:27:12.344397   29654 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 06:27:12.362848   29654 kubeconfig.go:125] found "ha-162483" server: "https://192.168.39.254:8443"
	I1205 06:27:12.362874   29654 api_server.go:166] Checking apiserver status ...
	I1205 06:27:12.362907   29654 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:27:12.383091   29654 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1818/cgroup
	W1205 06:27:12.395219   29654 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1818/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1205 06:27:12.395273   29654 ssh_runner.go:195] Run: ls
	I1205 06:27:12.400951   29654 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1205 06:27:12.405735   29654 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1205 06:27:12.405760   29654 status.go:463] ha-162483-m03 apiserver status = Running (err=<nil>)
	I1205 06:27:12.405767   29654 status.go:176] ha-162483-m03 status: &{Name:ha-162483-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1205 06:27:12.405782   29654 status.go:174] checking status of ha-162483-m04 ...
	I1205 06:27:12.407572   29654 status.go:371] ha-162483-m04 host status = "Running" (err=<nil>)
	I1205 06:27:12.407589   29654 host.go:66] Checking if "ha-162483-m04" exists ...
	I1205 06:27:12.410503   29654 main.go:143] libmachine: domain ha-162483-m04 has defined MAC address 52:54:00:bc:a5:5b in network mk-ha-162483
	I1205 06:27:12.411095   29654 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:bc:a5:5b", ip: ""} in network mk-ha-162483: {Iface:virbr1 ExpiryTime:2025-12-05 07:25:07 +0000 UTC Type:0 Mac:52:54:00:bc:a5:5b Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:ha-162483-m04 Clientid:01:52:54:00:bc:a5:5b}
	I1205 06:27:12.411122   29654 main.go:143] libmachine: domain ha-162483-m04 has defined IP address 192.168.39.158 and MAC address 52:54:00:bc:a5:5b in network mk-ha-162483
	I1205 06:27:12.411244   29654 host.go:66] Checking if "ha-162483-m04" exists ...
	I1205 06:27:12.411524   29654 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1205 06:27:12.413421   29654 main.go:143] libmachine: domain ha-162483-m04 has defined MAC address 52:54:00:bc:a5:5b in network mk-ha-162483
	I1205 06:27:12.413761   29654 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:bc:a5:5b", ip: ""} in network mk-ha-162483: {Iface:virbr1 ExpiryTime:2025-12-05 07:25:07 +0000 UTC Type:0 Mac:52:54:00:bc:a5:5b Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:ha-162483-m04 Clientid:01:52:54:00:bc:a5:5b}
	I1205 06:27:12.413777   29654 main.go:143] libmachine: domain ha-162483-m04 has defined IP address 192.168.39.158 and MAC address 52:54:00:bc:a5:5b in network mk-ha-162483
	I1205 06:27:12.413906   29654 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-12744/.minikube/machines/ha-162483-m04/id_rsa Username:docker}
	I1205 06:27:12.495826   29654 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 06:27:12.515729   29654 status.go:176] ha-162483-m04 status: &{Name:ha-162483-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (83.61s)
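Note: with one control plane stopped, status still reports the remaining nodes but exits non-zero (exit status 7 in this run), so scripted checks need to tolerate that; a sketch:
    out/minikube-linux-amd64 -p ha-162483 node stop m02 --alsologtostderr -v 5
    out/minikube-linux-amd64 -p ha-162483 status --alsologtostderr -v 5 || echo "status exit code $? (non-zero while m02 is stopped)"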

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.5s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.50s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (35.15s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-162483 node start m02 --alsologtostderr -v 5
E1205 06:27:15.152247   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/functional-158571/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-162483 node start m02 --alsologtostderr -v 5: (34.396792014s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-162483 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (35.15s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.69s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.69s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (333.86s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-162483 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-162483 stop --alsologtostderr -v 5
E1205 06:28:21.332386   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/functional-895947/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:28:27.900812   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/addons-704432/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:29:50.979469   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/addons-704432/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:30:37.468430   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/functional-895947/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:31:05.179315   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/functional-895947/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-162483 stop --alsologtostderr -v 5: (3m44.939086904s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-162483 start --wait true --alsologtostderr -v 5
E1205 06:31:47.451267   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/functional-158571/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-162483 start --wait true --alsologtostderr -v 5: (1m48.778785473s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-162483 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (333.86s)
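Note: the restart scenario is stop-then-start on the same profile, with the node list compared before and after; the flags below are the ones from the log:
    out/minikube-linux-amd64 -p ha-162483 node list --alsologtostderr -v 5
    out/minikube-linux-amd64 -p ha-162483 stop --alsologtostderr -v 5
    out/minikube-linux-amd64 -p ha-162483 start --wait true --alsologtostderr -v 5
    out/minikube-linux-amd64 -p ha-162483 node list --alsologtostderr -v 5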

                                                
                                    
x
+
TestMultiControlPlane/serial/DeleteSecondaryNode (17.94s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-162483 node delete m03 --alsologtostderr -v 5
E1205 06:33:27.900717   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/addons-704432/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-162483 node delete m03 --alsologtostderr -v 5: (17.299551575s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-162483 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (17.94s)
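Note: after deleting a control-plane node, the test re-checks readiness with a go-template; the same check can be run directly (quoting simplified relative to the log):
    out/minikube-linux-amd64 -p ha-162483 node delete m03 --alsologtostderr -v 5
    kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}'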

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.49s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.49s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StopCluster (259.86s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-162483 stop --alsologtostderr -v 5
E1205 06:35:37.468286   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/functional-895947/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:36:47.450518   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/functional-158571/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-162483 stop --alsologtostderr -v 5: (4m19.795469562s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-162483 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-162483 status --alsologtostderr -v 5: exit status 7 (62.231867ms)

                                                
                                                
-- stdout --
	ha-162483
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-162483-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-162483-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1205 06:38:01.005058   32797 out.go:360] Setting OutFile to fd 1 ...
	I1205 06:38:01.005148   32797 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:38:01.005156   32797 out.go:374] Setting ErrFile to fd 2...
	I1205 06:38:01.005160   32797 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:38:01.005380   32797 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-12744/.minikube/bin
	I1205 06:38:01.005577   32797 out.go:368] Setting JSON to false
	I1205 06:38:01.005604   32797 mustload.go:66] Loading cluster: ha-162483
	I1205 06:38:01.005728   32797 notify.go:221] Checking for updates...
	I1205 06:38:01.006130   32797 config.go:182] Loaded profile config "ha-162483": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1205 06:38:01.006152   32797 status.go:174] checking status of ha-162483 ...
	I1205 06:38:01.008581   32797 status.go:371] ha-162483 host status = "Stopped" (err=<nil>)
	I1205 06:38:01.008598   32797 status.go:384] host is not running, skipping remaining checks
	I1205 06:38:01.008605   32797 status.go:176] ha-162483 status: &{Name:ha-162483 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1205 06:38:01.008626   32797 status.go:174] checking status of ha-162483-m02 ...
	I1205 06:38:01.010162   32797 status.go:371] ha-162483-m02 host status = "Stopped" (err=<nil>)
	I1205 06:38:01.010178   32797 status.go:384] host is not running, skipping remaining checks
	I1205 06:38:01.010184   32797 status.go:176] ha-162483-m02 status: &{Name:ha-162483-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1205 06:38:01.010204   32797 status.go:174] checking status of ha-162483-m04 ...
	I1205 06:38:01.011720   32797 status.go:371] ha-162483-m04 host status = "Stopped" (err=<nil>)
	I1205 06:38:01.011737   32797 status.go:384] host is not running, skipping remaining checks
	I1205 06:38:01.011744   32797 status.go:176] ha-162483-m04 status: &{Name:ha-162483-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (259.86s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartCluster (105.17s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-162483 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio
E1205 06:38:10.515937   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/functional-158571/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:38:27.901183   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/addons-704432/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-162483 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio: (1m44.548246115s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-162483 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (105.17s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.5s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.50s)

                                                
                                    
x
+
TestMultiControlPlane/serial/AddSecondaryNode (71.56s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-162483 node add --control-plane --alsologtostderr -v 5
E1205 06:40:37.468436   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/functional-895947/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-162483 node add --control-plane --alsologtostderr -v 5: (1m10.910491837s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-162483 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (71.56s)
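Note: adding a control-plane node to a running HA cluster is a single node add with --control-plane; a sketch with the flags from the log:
    out/minikube-linux-amd64 -p ha-162483 node add --control-plane --alsologtostderr -v 5
    out/minikube-linux-amd64 -p ha-162483 status --alsologtostderr -v 5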

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.66s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.66s)

                                                
                                    
x
+
TestJSONOutput/start/Command (74.57s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-661760 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio
E1205 06:41:47.452647   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/functional-158571/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:42:00.543077   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/functional-895947/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-661760 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio: (1m14.572474164s)
--- PASS: TestJSONOutput/start/Command (74.57s)

                                                
                                    
x
+
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/Command (0.7s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-661760 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.70s)

                                                
                                    
x
+
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/Command (0.61s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-661760 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.61s)

                                                
                                    
x
+
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/Command (8.09s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-661760 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-661760 --output=json --user=testUser: (8.091553932s)
--- PASS: TestJSONOutput/stop/Command (8.09s)

                                                
                                    
x
+
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestErrorJSONOutput (0.22s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-031293 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-031293 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (74.966392ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"1cfaf35a-60ae-48bf-8368-87aa64b5f6fe","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-031293] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"ffdd2fa0-3b03-4c92-b523-55f4b0f9706b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21997"}}
	{"specversion":"1.0","id":"932dc64f-2578-402c-8cec-133dc5fe3cae","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"cfe6135a-de95-45b6-8dde-8066a6dcb00e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21997-12744/kubeconfig"}}
	{"specversion":"1.0","id":"1a45530f-5f9b-488f-8fc4-de0b78e67b42","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-12744/.minikube"}}
	{"specversion":"1.0","id":"15f12e48-76d4-4aef-9ab9-f14aafe339d4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"234482ad-43a2-49b2-96a0-ae7bf5a0c104","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"963b8a7e-7bc5-45b2-9d63-8a683b6b8994","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-031293" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-031293
--- PASS: TestErrorJSONOutput (0.22s)
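The captured stdout above is a stream of CloudEvents-style JSON objects, one per line, with the failure carried in an io.k8s.sigs.minikube.error event. A sketch of pulling that event out by hand, assuming jq is installed (the test itself does not use jq); the field names and expected values are taken from the events captured above:

	minikube start -p json-output-error-031293 --memory=3072 --output=json --wait=true --driver=fail \
	  | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | "\(.data.name): \(.data.message) (exit \(.data.exitcode))"'
	# Expected, per the captured event:
	# DRV_UNSUPPORTED_OS: The driver 'fail' is not supported on linux/amd64 (exit 56)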

                                                
                                    
x
+
TestMainNoArgs (0.06s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
x
+
TestMinikubeProfile (76.15s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-452422 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-452422 --driver=kvm2  --container-runtime=crio: (34.639045181s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-455214 --driver=kvm2  --container-runtime=crio
E1205 06:43:27.901350   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/addons-704432/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-455214 --driver=kvm2  --container-runtime=crio: (38.96657811s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-452422
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-455214
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-455214" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-455214
helpers_test.go:175: Cleaning up "first-452422" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-452422
--- PASS: TestMinikubeProfile (76.15s)
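A shell sketch of the profile-switching flow this test exercises, assuming minikube on PATH; the profile names are the ones from this run:

	minikube start -p first-452422 --driver=kvm2 --container-runtime=crio
	minikube start -p second-455214 --driver=kvm2 --container-runtime=crio
	minikube profile first-452422      # make first-452422 the active profile
	minikube profile list -ojson       # the test inspects this JSON for both profiles
	minikube delete -p second-455214
	minikube delete -p first-452422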

                                                
                                    
x
+
TestMountStart/serial/StartWithMountFirst (20.23s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-752205 --memory=3072 --mount-string /tmp/TestMountStartserial257085020/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-752205 --memory=3072 --mount-string /tmp/TestMountStartserial257085020/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (19.233472938s)
--- PASS: TestMountStart/serial/StartWithMountFirst (20.23s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountFirst (0.3s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-752205 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-752205 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.30s)
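The two commands above are the whole verification: list the mounted host directory over SSH, then confirm the mount point with findmnt. A sketch of the full round-trip, reusing the flags from StartWithMountFirst; the host path is just the test's temp directory, so substitute any local directory:

	minikube start -p mount-start-1-752205 --memory=3072 --no-kubernetes \
	  --mount-string /tmp/TestMountStartserial257085020/001:/minikube-host \
	  --mount-gid 0 --mount-uid 0 --mount-msize 6543 --mount-port 46464 \
	  --driver=kvm2 --container-runtime=crio
	minikube -p mount-start-1-752205 ssh -- ls /minikube-host              # host files visible in the guest
	minikube -p mount-start-1-752205 ssh -- findmnt --json /minikube-host  # mount point reported as JSON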

                                                
                                    
x
+
TestMountStart/serial/StartWithMountSecond (19.52s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-768749 --memory=3072 --mount-string /tmp/TestMountStartserial257085020/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-768749 --memory=3072 --mount-string /tmp/TestMountStartserial257085020/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (18.519857976s)
--- PASS: TestMountStart/serial/StartWithMountSecond (19.52s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountSecond (0.3s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-768749 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-768749 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.30s)

                                                
                                    
x
+
TestMountStart/serial/DeleteFirst (0.68s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-752205 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.68s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostDelete (0.3s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-768749 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-768749 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.30s)

                                                
                                    
x
+
TestMountStart/serial/Stop (1.24s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-768749
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-768749: (1.24106074s)
--- PASS: TestMountStart/serial/Stop (1.24s)

                                                
                                    
x
+
TestMountStart/serial/RestartStopped (17.92s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-768749
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-768749: (16.917020473s)
--- PASS: TestMountStart/serial/RestartStopped (17.92s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostStop (0.3s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-768749 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-768749 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.30s)

                                                
                                    
x
+
TestMultiNode/serial/FreshStart2Nodes (93.21s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-916216 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1205 06:45:37.468985   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/functional-895947/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-916216 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m32.89208608s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-916216 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (93.21s)

                                                
                                    
x
+
TestMultiNode/serial/DeployApp2Nodes (5.82s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-916216 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-916216 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-916216 -- rollout status deployment/busybox: (4.217321524s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-916216 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-916216 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-916216 -- exec busybox-7b57f96db7-96b5j -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-916216 -- exec busybox-7b57f96db7-hz7fq -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-916216 -- exec busybox-7b57f96db7-96b5j -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-916216 -- exec busybox-7b57f96db7-hz7fq -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-916216 -- exec busybox-7b57f96db7-96b5j -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-916216 -- exec busybox-7b57f96db7-hz7fq -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.82s)
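DeployApp2Nodes is a DNS smoke test: roll out the busybox deployment, then resolve kubernetes.io and the in-cluster service names from a pod on each node. A sketch using plain kubectl with the profile's context (the test drives kubectl through the built minikube binary instead); the pod names are from this run and will differ:

	kubectl --context multinode-916216 apply -f testdata/multinodes/multinode-pod-dns-test.yaml
	kubectl --context multinode-916216 rollout status deployment/busybox
	kubectl --context multinode-916216 get pods -o jsonpath='{.items[*].metadata.name}'
	# One pod per node; each must resolve external and cluster-internal names
	kubectl --context multinode-916216 exec busybox-7b57f96db7-96b5j -- nslookup kubernetes.io
	kubectl --context multinode-916216 exec busybox-7b57f96db7-96b5j -- nslookup kubernetes.default.svc.cluster.local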

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (0.82s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-916216 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-916216 -- exec busybox-7b57f96db7-96b5j -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-916216 -- exec busybox-7b57f96db7-96b5j -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-916216 -- exec busybox-7b57f96db7-hz7fq -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-916216 -- exec busybox-7b57f96db7-hz7fq -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.82s)

                                                
                                    
x
+
TestMultiNode/serial/AddNode (41.98s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-916216 -v=5 --alsologtostderr
E1205 06:46:30.980833   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/addons-704432/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:46:47.449523   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/functional-158571/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-916216 -v=5 --alsologtostderr: (41.538703853s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-916216 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (41.98s)

                                                
                                    
x
+
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-916216 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
x
+
TestMultiNode/serial/ProfileList (0.44s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.44s)

                                                
                                    
x
+
TestMultiNode/serial/CopyFile (5.82s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-916216 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-916216 cp testdata/cp-test.txt multinode-916216:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-916216 ssh -n multinode-916216 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-916216 cp multinode-916216:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3168498416/001/cp-test_multinode-916216.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-916216 ssh -n multinode-916216 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-916216 cp multinode-916216:/home/docker/cp-test.txt multinode-916216-m02:/home/docker/cp-test_multinode-916216_multinode-916216-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-916216 ssh -n multinode-916216 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-916216 ssh -n multinode-916216-m02 "sudo cat /home/docker/cp-test_multinode-916216_multinode-916216-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-916216 cp multinode-916216:/home/docker/cp-test.txt multinode-916216-m03:/home/docker/cp-test_multinode-916216_multinode-916216-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-916216 ssh -n multinode-916216 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-916216 ssh -n multinode-916216-m03 "sudo cat /home/docker/cp-test_multinode-916216_multinode-916216-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-916216 cp testdata/cp-test.txt multinode-916216-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-916216 ssh -n multinode-916216-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-916216 cp multinode-916216-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3168498416/001/cp-test_multinode-916216-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-916216 ssh -n multinode-916216-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-916216 cp multinode-916216-m02:/home/docker/cp-test.txt multinode-916216:/home/docker/cp-test_multinode-916216-m02_multinode-916216.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-916216 ssh -n multinode-916216-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-916216 ssh -n multinode-916216 "sudo cat /home/docker/cp-test_multinode-916216-m02_multinode-916216.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-916216 cp multinode-916216-m02:/home/docker/cp-test.txt multinode-916216-m03:/home/docker/cp-test_multinode-916216-m02_multinode-916216-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-916216 ssh -n multinode-916216-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-916216 ssh -n multinode-916216-m03 "sudo cat /home/docker/cp-test_multinode-916216-m02_multinode-916216-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-916216 cp testdata/cp-test.txt multinode-916216-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-916216 ssh -n multinode-916216-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-916216 cp multinode-916216-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3168498416/001/cp-test_multinode-916216-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-916216 ssh -n multinode-916216-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-916216 cp multinode-916216-m03:/home/docker/cp-test.txt multinode-916216:/home/docker/cp-test_multinode-916216-m03_multinode-916216.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-916216 ssh -n multinode-916216-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-916216 ssh -n multinode-916216 "sudo cat /home/docker/cp-test_multinode-916216-m03_multinode-916216.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-916216 cp multinode-916216-m03:/home/docker/cp-test.txt multinode-916216-m02:/home/docker/cp-test_multinode-916216-m03_multinode-916216-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-916216 ssh -n multinode-916216-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-916216 ssh -n multinode-916216-m02 "sudo cat /home/docker/cp-test_multinode-916216-m03_multinode-916216-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (5.82s)
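CopyFile repeats the same round-trip for every source/destination pair: minikube cp pushes a file, then ssh -n reads it back on the target node. One pair from the matrix above, assuming minikube on PATH:

	# Host -> primary node, then verify over SSH
	minikube -p multinode-916216 cp testdata/cp-test.txt multinode-916216:/home/docker/cp-test.txt
	minikube -p multinode-916216 ssh -n multinode-916216 "sudo cat /home/docker/cp-test.txt"
	# Node -> node copy, verified on the receiving node
	minikube -p multinode-916216 cp multinode-916216:/home/docker/cp-test.txt \
	  multinode-916216-m02:/home/docker/cp-test_multinode-916216_multinode-916216-m02.txt
	minikube -p multinode-916216 ssh -n multinode-916216-m02 \
	  "sudo cat /home/docker/cp-test_multinode-916216_multinode-916216-m02.txt"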

                                                
                                    
x
+
TestMultiNode/serial/StopNode (2.15s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-916216 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-916216 node stop m03: (1.511944902s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-916216 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-916216 status: exit status 7 (315.132292ms)

                                                
                                                
-- stdout --
	multinode-916216
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-916216-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-916216-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-916216 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-916216 status --alsologtostderr: exit status 7 (323.739526ms)

                                                
                                                
-- stdout --
	multinode-916216
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-916216-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-916216-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1205 06:47:14.542156   38357 out.go:360] Setting OutFile to fd 1 ...
	I1205 06:47:14.542394   38357 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:47:14.542403   38357 out.go:374] Setting ErrFile to fd 2...
	I1205 06:47:14.542407   38357 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:47:14.542579   38357 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-12744/.minikube/bin
	I1205 06:47:14.542801   38357 out.go:368] Setting JSON to false
	I1205 06:47:14.542829   38357 mustload.go:66] Loading cluster: multinode-916216
	I1205 06:47:14.542904   38357 notify.go:221] Checking for updates...
	I1205 06:47:14.543200   38357 config.go:182] Loaded profile config "multinode-916216": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1205 06:47:14.543215   38357 status.go:174] checking status of multinode-916216 ...
	I1205 06:47:14.545224   38357 status.go:371] multinode-916216 host status = "Running" (err=<nil>)
	I1205 06:47:14.545241   38357 host.go:66] Checking if "multinode-916216" exists ...
	I1205 06:47:14.547620   38357 main.go:143] libmachine: domain multinode-916216 has defined MAC address 52:54:00:5a:48:d4 in network mk-multinode-916216
	I1205 06:47:14.548022   38357 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:5a:48:d4", ip: ""} in network mk-multinode-916216: {Iface:virbr1 ExpiryTime:2025-12-05 07:44:59 +0000 UTC Type:0 Mac:52:54:00:5a:48:d4 Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:multinode-916216 Clientid:01:52:54:00:5a:48:d4}
	I1205 06:47:14.548046   38357 main.go:143] libmachine: domain multinode-916216 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:48:d4 in network mk-multinode-916216
	I1205 06:47:14.548166   38357 host.go:66] Checking if "multinode-916216" exists ...
	I1205 06:47:14.548331   38357 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1205 06:47:14.550706   38357 main.go:143] libmachine: domain multinode-916216 has defined MAC address 52:54:00:5a:48:d4 in network mk-multinode-916216
	I1205 06:47:14.551068   38357 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:5a:48:d4", ip: ""} in network mk-multinode-916216: {Iface:virbr1 ExpiryTime:2025-12-05 07:44:59 +0000 UTC Type:0 Mac:52:54:00:5a:48:d4 Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:multinode-916216 Clientid:01:52:54:00:5a:48:d4}
	I1205 06:47:14.551114   38357 main.go:143] libmachine: domain multinode-916216 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:48:d4 in network mk-multinode-916216
	I1205 06:47:14.551261   38357 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-12744/.minikube/machines/multinode-916216/id_rsa Username:docker}
	I1205 06:47:14.629083   38357 ssh_runner.go:195] Run: systemctl --version
	I1205 06:47:14.634908   38357 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 06:47:14.651140   38357 kubeconfig.go:125] found "multinode-916216" server: "https://192.168.39.114:8443"
	I1205 06:47:14.651170   38357 api_server.go:166] Checking apiserver status ...
	I1205 06:47:14.651202   38357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:14.668869   38357 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1342/cgroup
	W1205 06:47:14.686225   38357 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1342/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1205 06:47:14.686283   38357 ssh_runner.go:195] Run: ls
	I1205 06:47:14.692732   38357 api_server.go:253] Checking apiserver healthz at https://192.168.39.114:8443/healthz ...
	I1205 06:47:14.699353   38357 api_server.go:279] https://192.168.39.114:8443/healthz returned 200:
	ok
	I1205 06:47:14.699374   38357 status.go:463] multinode-916216 apiserver status = Running (err=<nil>)
	I1205 06:47:14.699382   38357 status.go:176] multinode-916216 status: &{Name:multinode-916216 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1205 06:47:14.699406   38357 status.go:174] checking status of multinode-916216-m02 ...
	I1205 06:47:14.700850   38357 status.go:371] multinode-916216-m02 host status = "Running" (err=<nil>)
	I1205 06:47:14.700864   38357 host.go:66] Checking if "multinode-916216-m02" exists ...
	I1205 06:47:14.703254   38357 main.go:143] libmachine: domain multinode-916216-m02 has defined MAC address 52:54:00:a3:be:15 in network mk-multinode-916216
	I1205 06:47:14.703653   38357 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a3:be:15", ip: ""} in network mk-multinode-916216: {Iface:virbr1 ExpiryTime:2025-12-05 07:45:50 +0000 UTC Type:0 Mac:52:54:00:a3:be:15 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:multinode-916216-m02 Clientid:01:52:54:00:a3:be:15}
	I1205 06:47:14.703675   38357 main.go:143] libmachine: domain multinode-916216-m02 has defined IP address 192.168.39.110 and MAC address 52:54:00:a3:be:15 in network mk-multinode-916216
	I1205 06:47:14.703806   38357 host.go:66] Checking if "multinode-916216-m02" exists ...
	I1205 06:47:14.704043   38357 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1205 06:47:14.706061   38357 main.go:143] libmachine: domain multinode-916216-m02 has defined MAC address 52:54:00:a3:be:15 in network mk-multinode-916216
	I1205 06:47:14.706386   38357 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a3:be:15", ip: ""} in network mk-multinode-916216: {Iface:virbr1 ExpiryTime:2025-12-05 07:45:50 +0000 UTC Type:0 Mac:52:54:00:a3:be:15 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:multinode-916216-m02 Clientid:01:52:54:00:a3:be:15}
	I1205 06:47:14.706405   38357 main.go:143] libmachine: domain multinode-916216-m02 has defined IP address 192.168.39.110 and MAC address 52:54:00:a3:be:15 in network mk-multinode-916216
	I1205 06:47:14.706535   38357 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-12744/.minikube/machines/multinode-916216-m02/id_rsa Username:docker}
	I1205 06:47:14.788850   38357 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 06:47:14.806215   38357 status.go:176] multinode-916216-m02 status: &{Name:multinode-916216-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1205 06:47:14.806242   38357 status.go:174] checking status of multinode-916216-m03 ...
	I1205 06:47:14.807793   38357 status.go:371] multinode-916216-m03 host status = "Stopped" (err=<nil>)
	I1205 06:47:14.807815   38357 status.go:384] host is not running, skipping remaining checks
	I1205 06:47:14.807823   38357 status.go:176] multinode-916216-m03 status: &{Name:multinode-916216-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.15s)
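Worth noting from the output above: once any node is stopped, minikube status still reports the remaining nodes normally but exits non-zero (7). A sketch of that check, assuming minikube on PATH and the profile from this run:

	minikube -p multinode-916216 node stop m03
	minikube -p multinode-916216 status
	echo "status exit code: $?"    # 7 in the run above, because m03 is Stopped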

                                                
                                    
x
+
TestMultiNode/serial/StartAfterStop (40.03s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-916216 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-916216 node start m03 -v=5 --alsologtostderr: (39.547834433s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-916216 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (40.03s)

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (294.8s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-916216
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-916216
E1205 06:48:27.900940   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/addons-704432/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:50:37.469014   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/functional-895947/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-916216: (2m46.719360557s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-916216 --wait=true -v=5 --alsologtostderr
E1205 06:51:47.449215   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/functional-158571/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-916216 --wait=true -v=5 --alsologtostderr: (2m7.961052387s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-916216
--- PASS: TestMultiNode/serial/RestartKeepsNodes (294.80s)
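The invariant here is that a full stop/start cycle keeps all three nodes. A sketch of the same comparison by hand; the diff step is only an illustration, since the test compares the node lists in Go:

	minikube node list -p multinode-916216 > /tmp/nodes.before
	minikube stop -p multinode-916216
	minikube start -p multinode-916216 --wait=true
	minikube node list -p multinode-916216 > /tmp/nodes.after
	diff /tmp/nodes.before /tmp/nodes.after   # an empty diff means every node survived the restart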

                                                
                                    
x
+
TestMultiNode/serial/DeleteNode (2.58s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-916216 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-916216 node delete m03: (2.144727322s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-916216 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.58s)

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (157.81s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-916216 stop
E1205 06:53:27.900955   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/addons-704432/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:54:50.519351   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/functional-158571/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-916216 stop: (2m37.682039244s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-916216 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-916216 status: exit status 7 (62.700846ms)

                                                
                                                
-- stdout --
	multinode-916216
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-916216-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-916216 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-916216 status --alsologtostderr: exit status 7 (61.731194ms)

                                                
                                                
-- stdout --
	multinode-916216
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-916216-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1205 06:55:30.029396   40703 out.go:360] Setting OutFile to fd 1 ...
	I1205 06:55:30.029630   40703 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:55:30.029639   40703 out.go:374] Setting ErrFile to fd 2...
	I1205 06:55:30.029643   40703 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:55:30.029820   40703 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-12744/.minikube/bin
	I1205 06:55:30.030010   40703 out.go:368] Setting JSON to false
	I1205 06:55:30.030035   40703 mustload.go:66] Loading cluster: multinode-916216
	I1205 06:55:30.030176   40703 notify.go:221] Checking for updates...
	I1205 06:55:30.030381   40703 config.go:182] Loaded profile config "multinode-916216": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1205 06:55:30.030394   40703 status.go:174] checking status of multinode-916216 ...
	I1205 06:55:30.032473   40703 status.go:371] multinode-916216 host status = "Stopped" (err=<nil>)
	I1205 06:55:30.032488   40703 status.go:384] host is not running, skipping remaining checks
	I1205 06:55:30.032493   40703 status.go:176] multinode-916216 status: &{Name:multinode-916216 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1205 06:55:30.032508   40703 status.go:174] checking status of multinode-916216-m02 ...
	I1205 06:55:30.033738   40703 status.go:371] multinode-916216-m02 host status = "Stopped" (err=<nil>)
	I1205 06:55:30.033753   40703 status.go:384] host is not running, skipping remaining checks
	I1205 06:55:30.033757   40703 status.go:176] multinode-916216-m02 status: &{Name:multinode-916216-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (157.81s)

                                                
                                    
x
+
TestMultiNode/serial/RestartMultiNode (82.08s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-916216 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1205 06:55:37.468558   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/functional-895947/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:56:47.449584   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/functional-158571/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-916216 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m21.623704337s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-916216 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (82.08s)

                                                
                                    
x
+
TestMultiNode/serial/ValidateNameConflict (39.04s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-916216
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-916216-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-916216-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (79.813288ms)

                                                
                                                
-- stdout --
	* [multinode-916216-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21997
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21997-12744/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-12744/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-916216-m02' is duplicated with machine name 'multinode-916216-m02' in profile 'multinode-916216'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-916216-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-916216-m03 --driver=kvm2  --container-runtime=crio: (37.891141695s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-916216
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-916216: exit status 80 (200.586633ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-916216 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-916216-m03 already exists in multinode-916216-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-916216-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (39.04s)
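ValidateNameConflict exercises two guards visible in the output above: start refuses a profile name that collides with an existing machine name (exit 14, MK_USAGE), and node add refuses a node whose generated name already exists as a standalone profile (exit 80, GUEST_NODE_ADD). A sketch, with the exit codes taken from this run:

	minikube start -p multinode-916216-m02 --driver=kvm2 --container-runtime=crio
	echo "exit: $?"   # 14 - profile name duplicates a machine in multinode-916216
	minikube start -p multinode-916216-m03 --driver=kvm2 --container-runtime=crio
	minikube node add -p multinode-916216
	echo "exit: $?"   # 80 - multinode-916216-m03 already exists as its own profile
	minikube delete -p multinode-916216-m03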

                                                
                                    
x
+
TestScheduledStopUnix (110.31s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-992335 --memory=3072 --driver=kvm2  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-992335 --memory=3072 --driver=kvm2  --container-runtime=crio: (38.714022022s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-992335 --schedule 5m -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1205 07:00:08.542171   42895 out.go:360] Setting OutFile to fd 1 ...
	I1205 07:00:08.542505   42895 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 07:00:08.542515   42895 out.go:374] Setting ErrFile to fd 2...
	I1205 07:00:08.542519   42895 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 07:00:08.542709   42895 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-12744/.minikube/bin
	I1205 07:00:08.542958   42895 out.go:368] Setting JSON to false
	I1205 07:00:08.543075   42895 mustload.go:66] Loading cluster: scheduled-stop-992335
	I1205 07:00:08.543394   42895 config.go:182] Loaded profile config "scheduled-stop-992335": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1205 07:00:08.543458   42895 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/scheduled-stop-992335/config.json ...
	I1205 07:00:08.543630   42895 mustload.go:66] Loading cluster: scheduled-stop-992335
	I1205 07:00:08.543734   42895 config.go:182] Loaded profile config "scheduled-stop-992335": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2

                                                
                                                
** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-992335 -n scheduled-stop-992335
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-992335 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1205 07:00:08.822562   42949 out.go:360] Setting OutFile to fd 1 ...
	I1205 07:00:08.822670   42949 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 07:00:08.822676   42949 out.go:374] Setting ErrFile to fd 2...
	I1205 07:00:08.822693   42949 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 07:00:08.822893   42949 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-12744/.minikube/bin
	I1205 07:00:08.823206   42949 out.go:368] Setting JSON to false
	I1205 07:00:08.823427   42949 daemonize_unix.go:73] killing process 42930 as it is an old scheduled stop
	I1205 07:00:08.823533   42949 mustload.go:66] Loading cluster: scheduled-stop-992335
	I1205 07:00:08.823993   42949 config.go:182] Loaded profile config "scheduled-stop-992335": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1205 07:00:08.824090   42949 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/scheduled-stop-992335/config.json ...
	I1205 07:00:08.824333   42949 mustload.go:66] Loading cluster: scheduled-stop-992335
	I1205 07:00:08.824478   42949 config.go:182] Loaded profile config "scheduled-stop-992335": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I1205 07:00:08.828679   16702 retry.go:31] will retry after 110.776µs: open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/scheduled-stop-992335/pid: no such file or directory
I1205 07:00:08.829886   16702 retry.go:31] will retry after 96.259µs: open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/scheduled-stop-992335/pid: no such file or directory
I1205 07:00:08.831050   16702 retry.go:31] will retry after 185.115µs: open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/scheduled-stop-992335/pid: no such file or directory
I1205 07:00:08.832228   16702 retry.go:31] will retry after 354.243µs: open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/scheduled-stop-992335/pid: no such file or directory
I1205 07:00:08.833354   16702 retry.go:31] will retry after 556.998µs: open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/scheduled-stop-992335/pid: no such file or directory
I1205 07:00:08.834477   16702 retry.go:31] will retry after 1.053626ms: open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/scheduled-stop-992335/pid: no such file or directory
I1205 07:00:08.835598   16702 retry.go:31] will retry after 1.06188ms: open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/scheduled-stop-992335/pid: no such file or directory
I1205 07:00:08.836783   16702 retry.go:31] will retry after 931.128µs: open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/scheduled-stop-992335/pid: no such file or directory
I1205 07:00:08.837904   16702 retry.go:31] will retry after 3.666251ms: open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/scheduled-stop-992335/pid: no such file or directory
I1205 07:00:08.842101   16702 retry.go:31] will retry after 1.998494ms: open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/scheduled-stop-992335/pid: no such file or directory
I1205 07:00:08.844303   16702 retry.go:31] will retry after 3.468046ms: open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/scheduled-stop-992335/pid: no such file or directory
I1205 07:00:08.848522   16702 retry.go:31] will retry after 11.048338ms: open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/scheduled-stop-992335/pid: no such file or directory
I1205 07:00:08.859703   16702 retry.go:31] will retry after 13.577454ms: open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/scheduled-stop-992335/pid: no such file or directory
I1205 07:00:08.873981   16702 retry.go:31] will retry after 13.053272ms: open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/scheduled-stop-992335/pid: no such file or directory
I1205 07:00:08.887202   16702 retry.go:31] will retry after 16.616063ms: open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/scheduled-stop-992335/pid: no such file or directory
I1205 07:00:08.904521   16702 retry.go:31] will retry after 61.794055ms: open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/scheduled-stop-992335/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-992335 --cancel-scheduled
minikube stop output:

                                                
                                                
-- stdout --
	* All existing scheduled stops cancelled

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-992335 -n scheduled-stop-992335
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-992335
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-992335 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1205 07:00:34.523786   43097 out.go:360] Setting OutFile to fd 1 ...
	I1205 07:00:34.524018   43097 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 07:00:34.524026   43097 out.go:374] Setting ErrFile to fd 2...
	I1205 07:00:34.524030   43097 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 07:00:34.524211   43097 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-12744/.minikube/bin
	I1205 07:00:34.524422   43097 out.go:368] Setting JSON to false
	I1205 07:00:34.524491   43097 mustload.go:66] Loading cluster: scheduled-stop-992335
	I1205 07:00:34.524779   43097 config.go:182] Loaded profile config "scheduled-stop-992335": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1205 07:00:34.524844   43097 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/scheduled-stop-992335/config.json ...
	I1205 07:00:34.525024   43097 mustload.go:66] Loading cluster: scheduled-stop-992335
	I1205 07:00:34.525112   43097 config.go:182] Loaded profile config "scheduled-stop-992335": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2

                                                
                                                
** /stderr **
E1205 07:00:37.468588   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/functional-895947/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:172: signal error was:  os: process already finished
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-992335
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-992335: exit status 7 (58.678057ms)

                                                
                                                
-- stdout --
	scheduled-stop-992335
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-992335 -n scheduled-stop-992335
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-992335 -n scheduled-stop-992335: exit status 7 (59.749222ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-992335" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-992335
--- PASS: TestScheduledStopUnix (110.31s)
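Note: the scheduled-stop flow exercised above can be reproduced by hand with the same flags the test passes. A minimal sketch against a hypothetical profile named demo (driver and runtime as in this run):

	minikube start -p demo --memory=3072 --driver=kvm2 --container-runtime=crio
	minikube stop -p demo --schedule 5m                        # arm a stop five minutes out
	minikube status -p demo --format={{.TimeToStop}}           # inspect the pending schedule
	minikube stop -p demo --cancel-scheduled                   # cancel all pending scheduled stops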

                                                
                                    
TestRunningBinaryUpgrade (390.71s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.35.0.1144913728 start -p running-upgrade-228729 --memory=3072 --vm-driver=kvm2  --container-runtime=crio
E1205 07:01:47.449880   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/functional-158571/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.35.0.1144913728 start -p running-upgrade-228729 --memory=3072 --vm-driver=kvm2  --container-runtime=crio: (1m30.621963003s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-228729 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-228729 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (4m58.406776252s)
helpers_test.go:175: Cleaning up "running-upgrade-228729" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-228729
--- PASS: TestRunningBinaryUpgrade (390.71s)
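Note: the flow above upgrades a running cluster in place by re-running start with a newer binary against the same profile. A sketch, assuming an older release binary at a hypothetical path and a hypothetical profile running-upgrade (the test uses a downloaded v1.35.0 build in /tmp):

	/path/to/minikube-v1.35.0 start -p running-upgrade --memory=3072 --vm-driver=kvm2 --container-runtime=crio
	out/minikube-linux-amd64 start -p running-upgrade --memory=3072 --driver=kvm2 --container-runtime=crio
	out/minikube-linux-amd64 delete -p running-upgrade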

                                                
                                    
TestKubernetesUpgrade (503.18s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-256837 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-256837 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (56.405157372s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-256837
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-256837: (2.221191575s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-256837 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-256837 status --format={{.Host}}: exit status 7 (66.82299ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-256837 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-256837 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (56.969141696s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-256837 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-256837 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-256837 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio: exit status 106 (75.712782ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-256837] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21997
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21997-12744/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-12744/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.35.0-beta.0 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-256837
	    minikube start -p kubernetes-upgrade-256837 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-2568372 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.35.0-beta.0, by running:
	    
	    minikube start -p kubernetes-upgrade-256837 --kubernetes-version=v1.35.0-beta.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-256837 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E1205 07:03:27.901255   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/addons-704432/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-256837 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (6m26.409599867s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-256837" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-256837
--- PASS: TestKubernetesUpgrade (503.18s)
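Note: the Kubernetes upgrade path above is stop, then start with a newer --kubernetes-version; a direct downgrade is refused with K8S_DOWNGRADE_UNSUPPORTED (exit status 106), so going back means recreating the profile, as the suggestion text spells out. A sketch with a hypothetical profile kup:

	minikube start -p kup --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2 --container-runtime=crio
	minikube stop -p kup
	minikube start -p kup --memory=3072 --kubernetes-version=v1.35.0-beta.0 --driver=kvm2 --container-runtime=crio
	# downgrading requires a fresh profile:
	minikube delete -p kup
	minikube start -p kup --kubernetes-version=v1.28.0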

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-138934 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-138934 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio: exit status 14 (91.031994ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-138934] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21997
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21997-12744/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-12744/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)
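Note: --no-kubernetes and --kubernetes-version are mutually exclusive, which is the usage error asserted above. If a version is pinned in the global config, the suggested fix is to clear it before starting without Kubernetes (profile name is a placeholder):

	minikube config unset kubernetes-version
	minikube start -p <profile> --no-kubernetes --driver=kvm2 --container-runtime=crio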

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (75.48s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-138934 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-138934 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m15.174760392s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-138934 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (75.48s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (25.33s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-138934 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-138934 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (24.027775616s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-138934 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-138934 status -o json: exit status 2 (233.147719ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-138934","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-138934
no_kubernetes_test.go:149: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-138934: (1.068922847s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (25.33s)
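Note: status -o json gives a machine-readable snapshot of the profile; as seen above it exits non-zero (status 2 here) when Kubernetes components are stopped even though the host is Running, so the exit code and the JSON fields carry different information. Profile name below is a placeholder:

	minikube -p <profile> status -o json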

                                                
                                    
TestPause/serial/Start (97.77s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-462111 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-462111 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m37.774807963s)
--- PASS: TestPause/serial/Start (97.77s)

                                                
                                    
TestNoKubernetes/serial/Start (45.33s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-138934 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
E1205 07:03:10.983143   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/addons-704432/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-138934 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (45.32634021s)
--- PASS: TestNoKubernetes/serial/Start (45.33s)

                                                
                                    
TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/21997-12744/.minikube/cache/linux/amd64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.16s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-138934 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-138934 "sudo systemctl is-active --quiet service kubelet": exit status 1 (159.690951ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 4

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.16s)
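Note: the check above asserts, from inside the guest, that the kubelet unit is not active; the same probe can be run manually against a placeholder profile (a non-zero exit means kubelet is not running):

	minikube ssh -p <profile> "sudo systemctl is-active --quiet service kubelet"
	echo $?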

                                                
                                    
TestNoKubernetes/serial/ProfileList (16.16s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:194: (dbg) Done: out/minikube-linux-amd64 profile list: (15.542588712s)
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (16.16s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.27s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-138934
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-138934: (1.270603516s)
--- PASS: TestNoKubernetes/serial/Stop (1.27s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (17.21s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-138934 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-138934 --driver=kvm2  --container-runtime=crio: (17.213236429s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (17.21s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (33.58s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-462111 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-462111 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (33.556300895s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (33.58s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.15s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-138934 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-138934 "sudo systemctl is-active --quiet service kubelet": exit status 1 (149.997499ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 4

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.15s)

                                                
                                    
TestPause/serial/Pause (1s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-462111 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-amd64 pause -p pause-462111 --alsologtostderr -v=5: (1.00272738s)
--- PASS: TestPause/serial/Pause (1.00s)

                                                
                                    
TestPause/serial/VerifyStatus (0.22s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-462111 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-462111 --output=json --layout=cluster: exit status 2 (222.488824ms)

                                                
                                                
-- stdout --
	{"Name":"pause-462111","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 6 containers in: kube-system, kubernetes-dashboard, istio-operator","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-462111","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.22s)

                                                
                                    
TestPause/serial/Unpause (0.74s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-462111 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.74s)

                                                
                                    
TestPause/serial/PauseAgain (0.83s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-462111 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.83s)

                                                
                                    
TestPause/serial/DeletePaused (0.83s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-462111 --alsologtostderr -v=5
--- PASS: TestPause/serial/DeletePaused (0.83s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (0.49s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.49s)
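Note: the pause group above walks the full lifecycle; as plain commands against a hypothetical profile p it reads:

	minikube pause -p p --alsologtostderr -v=5
	minikube status -p p --output=json --layout=cluster   # paused components report StatusCode 418
	minikube unpause -p p
	minikube pause -p p
	minikube delete -p p
	minikube profile list --output json                   # confirm the profile is gone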

                                                
                                    
TestNetworkPlugins/group/false (3.8s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-550303 --memory=3072 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-550303 --memory=3072 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (115.620095ms)

                                                
                                                
-- stdout --
	* [false-550303] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21997
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21997-12744/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-12744/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1205 07:05:00.293617   46881 out.go:360] Setting OutFile to fd 1 ...
	I1205 07:05:00.293757   46881 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 07:05:00.293769   46881 out.go:374] Setting ErrFile to fd 2...
	I1205 07:05:00.293776   46881 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 07:05:00.293995   46881 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-12744/.minikube/bin
	I1205 07:05:00.294504   46881 out.go:368] Setting JSON to false
	I1205 07:05:00.295385   46881 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":6445,"bootTime":1764911855,"procs":197,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1205 07:05:00.295437   46881 start.go:143] virtualization: kvm guest
	I1205 07:05:00.297496   46881 out.go:179] * [false-550303] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1205 07:05:00.298845   46881 notify.go:221] Checking for updates...
	I1205 07:05:00.298878   46881 out.go:179]   - MINIKUBE_LOCATION=21997
	I1205 07:05:00.300634   46881 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 07:05:00.302754   46881 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-12744/kubeconfig
	I1205 07:05:00.304053   46881 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-12744/.minikube
	I1205 07:05:00.305295   46881 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1205 07:05:00.306545   46881 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 07:05:00.308192   46881 config.go:182] Loaded profile config "force-systemd-env-434541": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1205 07:05:00.308308   46881 config.go:182] Loaded profile config "kubernetes-upgrade-256837": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1205 07:05:00.308408   46881 config.go:182] Loaded profile config "running-upgrade-228729": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1205 07:05:00.308517   46881 driver.go:422] Setting default libvirt URI to qemu:///system
	I1205 07:05:00.345036   46881 out.go:179] * Using the kvm2 driver based on user configuration
	I1205 07:05:00.346418   46881 start.go:309] selected driver: kvm2
	I1205 07:05:00.346435   46881 start.go:927] validating driver "kvm2" against <nil>
	I1205 07:05:00.346447   46881 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 07:05:00.348364   46881 out.go:203] 
	W1205 07:05:00.349596   46881 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1205 07:05:00.350678   46881 out.go:203] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-550303 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-550303

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-550303

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-550303

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-550303

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-550303

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-550303

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-550303

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-550303

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-550303

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-550303

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-550303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-550303"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-550303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-550303"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-550303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-550303"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-550303

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-550303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-550303"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-550303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-550303"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-550303" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-550303" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-550303" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-550303" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-550303" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-550303" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-550303" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-550303" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-550303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-550303"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-550303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-550303"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-550303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-550303"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-550303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-550303"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-550303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-550303"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-550303" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-550303" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-550303" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-550303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-550303"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-550303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-550303"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-550303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-550303"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-550303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-550303"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-550303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-550303"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21997-12744/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 05 Dec 2025 07:03:14 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.50.132:8443
  name: kubernetes-upgrade-256837
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21997-12744/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 05 Dec 2025 07:03:30 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.72.218:8443
  name: running-upgrade-228729
contexts:
- context:
    cluster: kubernetes-upgrade-256837
    extensions:
    - extension:
        last-update: Fri, 05 Dec 2025 07:03:14 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: kubernetes-upgrade-256837
  name: kubernetes-upgrade-256837
- context:
    cluster: running-upgrade-228729
    user: running-upgrade-228729
  name: running-upgrade-228729
current-context: ""
kind: Config
users:
- name: kubernetes-upgrade-256837
  user:
    client-certificate: /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/kubernetes-upgrade-256837/client.crt
    client-key: /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/kubernetes-upgrade-256837/client.key
- name: running-upgrade-228729
  user:
    client-certificate: /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/running-upgrade-228729/client.crt
    client-key: /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/running-upgrade-228729/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-550303

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-550303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-550303"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-550303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-550303"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-550303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-550303"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-550303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-550303"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-550303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-550303"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-550303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-550303"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-550303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-550303"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-550303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-550303"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-550303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-550303"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-550303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-550303"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-550303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-550303"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-550303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-550303"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-550303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-550303"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-550303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-550303"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-550303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-550303"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-550303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-550303"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-550303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-550303"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-550303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-550303"

                                                
                                                
----------------------- debugLogs end: false-550303 [took: 3.500541933s] --------------------------------
helpers_test.go:175: Cleaning up "false-550303" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-550303
--- PASS: TestNetworkPlugins/group/false (3.80s)
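Note: the failure asserted above is the usage check that the crio runtime requires a CNI, so --cni=false is rejected before any VM is created. To actually start a crio cluster, omit --cni or name a plugin explicitly; a hedged example (assuming bridge is among the accepted --cni values, profile name is a placeholder):

	minikube start -p <profile> --memory=3072 --cni=bridge --driver=kvm2 --container-runtime=crio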

                                                
                                    
TestISOImage/Setup (19.69s)

                                                
                                                
=== RUN   TestISOImage/Setup
iso_test.go:47: (dbg) Run:  out/minikube-linux-amd64 start -p guest-902352 --no-kubernetes --driver=kvm2  --container-runtime=crio
iso_test.go:47: (dbg) Done: out/minikube-linux-amd64 start -p guest-902352 --no-kubernetes --driver=kvm2  --container-runtime=crio: (19.687952678s)
--- PASS: TestISOImage/Setup (19.69s)
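Note: the Binaries subtests that follow only assert that each tool resolves on the guest's PATH via which; the same spot-check can be run by hand against the guest-902352 profile created above:

	for b in crictl curl docker git iptables podman rsync socat; do
	    out/minikube-linux-amd64 -p guest-902352 ssh "which $b"
	done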

                                                
                                    
TestISOImage/Binaries/crictl (0.19s)

                                                
                                                
=== RUN   TestISOImage/Binaries/crictl
=== PAUSE TestISOImage/Binaries/crictl

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/crictl
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-902352 ssh "which crictl"
--- PASS: TestISOImage/Binaries/crictl (0.19s)

                                                
                                    
TestISOImage/Binaries/curl (0.18s)

                                                
                                                
=== RUN   TestISOImage/Binaries/curl
=== PAUSE TestISOImage/Binaries/curl

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/curl
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-902352 ssh "which curl"
--- PASS: TestISOImage/Binaries/curl (0.18s)

                                                
                                    
TestISOImage/Binaries/docker (0.17s)

                                                
                                                
=== RUN   TestISOImage/Binaries/docker
=== PAUSE TestISOImage/Binaries/docker

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/docker
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-902352 ssh "which docker"
--- PASS: TestISOImage/Binaries/docker (0.17s)

                                                
                                    
x
+
TestISOImage/Binaries/git (0.18s)

                                                
                                                
=== RUN   TestISOImage/Binaries/git
=== PAUSE TestISOImage/Binaries/git

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/git
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-902352 ssh "which git"
--- PASS: TestISOImage/Binaries/git (0.18s)

                                                
                                    
x
+
TestISOImage/Binaries/iptables (0.18s)

                                                
                                                
=== RUN   TestISOImage/Binaries/iptables
=== PAUSE TestISOImage/Binaries/iptables

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/iptables
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-902352 ssh "which iptables"
--- PASS: TestISOImage/Binaries/iptables (0.18s)

                                                
                                    
x
+
TestISOImage/Binaries/podman (0.18s)

                                                
                                                
=== RUN   TestISOImage/Binaries/podman
=== PAUSE TestISOImage/Binaries/podman

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/podman
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-902352 ssh "which podman"
--- PASS: TestISOImage/Binaries/podman (0.18s)

                                                
                                    
x
+
TestISOImage/Binaries/rsync (0.19s)

                                                
                                                
=== RUN   TestISOImage/Binaries/rsync
=== PAUSE TestISOImage/Binaries/rsync

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/rsync
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-902352 ssh "which rsync"
--- PASS: TestISOImage/Binaries/rsync (0.19s)

                                                
                                    
x
+
TestISOImage/Binaries/socat (0.17s)

                                                
                                                
=== RUN   TestISOImage/Binaries/socat
=== PAUSE TestISOImage/Binaries/socat

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/socat
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-902352 ssh "which socat"
--- PASS: TestISOImage/Binaries/socat (0.17s)

                                                
                                    
x
+
TestISOImage/Binaries/wget (0.18s)

                                                
                                                
=== RUN   TestISOImage/Binaries/wget
=== PAUSE TestISOImage/Binaries/wget

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/wget
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-902352 ssh "which wget"
--- PASS: TestISOImage/Binaries/wget (0.18s)

                                                
                                    
x
+
TestISOImage/Binaries/VBoxControl (0.17s)

                                                
                                                
=== RUN   TestISOImage/Binaries/VBoxControl
=== PAUSE TestISOImage/Binaries/VBoxControl

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/VBoxControl
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-902352 ssh "which VBoxControl"
--- PASS: TestISOImage/Binaries/VBoxControl (0.17s)

                                                
                                    
x
+
TestISOImage/Binaries/VBoxService (0.18s)

                                                
                                                
=== RUN   TestISOImage/Binaries/VBoxService
=== PAUSE TestISOImage/Binaries/VBoxService

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/VBoxService
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-902352 ssh "which VBoxService"
--- PASS: TestISOImage/Binaries/VBoxService (0.18s)
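
All of the TestISOImage/Binaries subtests above run the same check from iso_test.go:76: ssh into the guest and confirm the tool is on the PATH. The whole set can be replayed in one loop, assuming the guest-902352 profile from Setup is still running:

  # confirm each expected binary ships in the ISO image
  for bin in crictl curl docker git iptables podman rsync socat wget VBoxControl VBoxService; do
    out/minikube-linux-amd64 -p guest-902352 ssh "which $bin"
  done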

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (0.68s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.68s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (72.06s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.35.0.1722660939 start -p stopped-upgrade-110508 --memory=3072 --vm-driver=kvm2  --container-runtime=crio
E1205 07:06:47.449227   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/functional-158571/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.35.0.1722660939 start -p stopped-upgrade-110508 --memory=3072 --vm-driver=kvm2  --container-runtime=crio: (39.700881826s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.35.0.1722660939 -p stopped-upgrade-110508 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.35.0.1722660939 -p stopped-upgrade-110508 stop: (1.833447201s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-110508 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-110508 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (30.524507528s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (72.06s)
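
The Upgrade flow above is: create a cluster with a previously released binary, stop it, then start the stopped cluster again with the binary under test. The same three steps by hand, assuming the old release has already been downloaded to the temporary path shown in the log:

  # 1. create the cluster with the old release (the older binary still uses --vm-driver)
  /tmp/minikube-v1.35.0.1722660939 start -p stopped-upgrade-110508 --memory=3072 --vm-driver=kvm2 --container-runtime=crio
  # 2. stop it with the same old binary
  /tmp/minikube-v1.35.0.1722660939 -p stopped-upgrade-110508 stop
  # 3. restart the stopped cluster with the binary under test
  out/minikube-linux-amd64 start -p stopped-upgrade-110508 --memory=3072 --alsologtostderr -v=1 --driver=kvm2 --container-runtime=crio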

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (1.14s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-110508
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-110508: (1.137129449s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.14s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (104.34s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-445695 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-445695 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0: (1m44.34478887s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (104.34s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (89.8s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-516675 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
E1205 07:08:27.900813   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/addons-704432/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-516675 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (1m29.803138154s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (89.80s)
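
The no-preload profile is started with --preload=false, so the node does not use the preloaded images tarball and instead pulls images for the requested Kubernetes version. The start command, as run above:

  out/minikube-linux-amd64 start -p no-preload-516675 --memory=3072 --alsologtostderr --wait=true \
      --preload=false --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.35.0-beta.0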

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (10.34s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-445695 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [1c366d5b-6dd9-4602-9065-ede9eb17d3da] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [1c366d5b-6dd9-4602-9065-ede9eb17d3da] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.004198167s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-445695 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.34s)
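
DeployApp creates the busybox pod from testdata/busybox.yaml, waits for it to become Ready, and then reads the open-file limit inside the container. The test polls for readiness through its own helpers; an approximate manual equivalent, assuming kubectl points at the same context:

  kubectl --context old-k8s-version-445695 create -f testdata/busybox.yaml
  # kubectl wait is a rough stand-in for the test's own readiness polling
  kubectl --context old-k8s-version-445695 wait --for=condition=Ready pod -l integration-test=busybox --timeout=8m
  kubectl --context old-k8s-version-445695 exec busybox -- /bin/sh -c "ulimit -n"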

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (82.74s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-887442 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-887442 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.2: (1m22.73643082s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (82.74s)
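
The embed-certs profile starts the cluster with --embed-certs, which inlines the client certificate data into the generated kubeconfig instead of referencing the certificate files on disk. The start command, as run above:

  out/minikube-linux-amd64 start -p embed-certs-887442 --memory=3072 --alsologtostderr --wait=true \
      --embed-certs --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.34.2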

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.16s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-445695 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-445695 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.078922395s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-445695 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.16s)
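
EnableAddonWhileActive turns on the metrics-server addon on the running cluster but overrides its image and registry with a deliberately unreachable placeholder (echoserver pulled from fake.domain), presumably so the override flags are exercised without fetching the real image; the resulting deployment is then inspected with kubectl:

  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-445695 \
      --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
  kubectl --context old-k8s-version-445695 describe deploy/metrics-server -n kube-system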

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (85.35s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-445695 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-445695 --alsologtostderr -v=3: (1m25.349196168s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (85.35s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (11.29s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-516675 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [c3122c43-4e77-4e92-98b4-a5d209caa8a7] Pending
helpers_test.go:352: "busybox" [c3122c43-4e77-4e92-98b4-a5d209caa8a7] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [c3122c43-4e77-4e92-98b4-a5d209caa8a7] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 11.003592672s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-516675 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (11.29s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.06s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-516675 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-516675 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.06s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (85.81s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-516675 --alsologtostderr -v=3
E1205 07:10:37.468610   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/functional-895947/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-516675 --alsologtostderr -v=3: (1m25.814268378s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (85.81s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (11.28s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-887442 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [4d3c348b-da19-4f79-b390-a243fe13b2e9] Pending
helpers_test.go:352: "busybox" [4d3c348b-da19-4f79-b390-a243fe13b2e9] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [4d3c348b-da19-4f79-b390-a243fe13b2e9] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 11.00351591s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-887442 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (11.28s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.15s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-445695 -n old-k8s-version-445695
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-445695 -n old-k8s-version-445695: exit status 7 (60.875552ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-445695 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.15s)
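
EnableAddonAfterStop first confirms the node really is down (minikube status reports "Stopped" and exits 7, which the test tolerates) and then checks that addons can still be enabled against a stopped profile:

  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-445695 -n old-k8s-version-445695   # "Stopped", exit status 7
  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-445695 --images=MetricsScraper=registry.k8s.io/echoserver:1.4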

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (44.6s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-445695 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-445695 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0: (44.246163625s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-445695 -n old-k8s-version-445695
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (44.60s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.95s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-887442 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-887442 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.95s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (90.68s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-887442 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-887442 --alsologtostderr -v=3: (1m30.682254737s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (90.68s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-516675 -n no-preload-516675
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-516675 -n no-preload-516675: exit status 7 (82.96382ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-516675 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (55.7s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-516675 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
E1205 07:11:30.520718   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/functional-158571/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:11:47.448771   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/functional-158571/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-516675 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (55.451940195s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-516675 -n no-preload-516675
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (55.70s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (10.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-r46vp" [04363ff9-4957-486c-bac5-08a851cc7fc8] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-r46vp" [04363ff9-4957-486c-bac5-08a851cc7fc8] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 10.004134837s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (10.01s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-r46vp" [04363ff9-4957-486c-bac5-08a851cc7fc8] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00374922s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-445695 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-445695 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.20s)
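
VerifyKubernetesImages lists the images present on the node as JSON and reports anything outside the standard minikube/Kubernetes set (here the busybox test image and the kindnet image). The underlying command:

  out/minikube-linux-amd64 -p old-k8s-version-445695 image list --format=json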

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Pause (2.62s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-445695 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-445695 -n old-k8s-version-445695
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-445695 -n old-k8s-version-445695: exit status 2 (218.816285ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-445695 -n old-k8s-version-445695
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-445695 -n old-k8s-version-445695: exit status 2 (215.464195ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-445695 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-445695 -n old-k8s-version-445695
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-445695 -n old-k8s-version-445695
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.62s)
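
The Pause step cycles the profile through pause and unpause, checking component state in between; while paused, minikube status exits with status 2 and reports the apiserver as "Paused" and the kubelet as "Stopped", which the test treats as expected:

  out/minikube-linux-amd64 pause -p old-k8s-version-445695 --alsologtostderr -v=1
  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-445695 -n old-k8s-version-445695   # Paused (exit 2)
  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-445695 -n old-k8s-version-445695     # Stopped (exit 2)
  out/minikube-linux-amd64 unpause -p old-k8s-version-445695 --alsologtostderr -v=1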

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (56.32s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-336856 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-336856 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.2: (56.317157111s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (56.32s)
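
The default-k8s-diff-port profile is the same start flow but with the API server bound to port 8444 instead of the default 8443, via --apiserver-port:

  out/minikube-linux-amd64 start -p default-k8s-diff-port-336856 --memory=3072 --alsologtostderr --wait=true \
      --apiserver-port=8444 --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.34.2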

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-b84665fb8-vxlbn" [5d2b3307-5ce7-4e61-88c6-4c4d01ec570f] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004156735s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-b84665fb8-vxlbn" [5d2b3307-5ce7-4e61-88c6-4c4d01ec570f] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003791333s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-516675 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-516675 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.23s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Pause (3.41s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-516675 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p no-preload-516675 --alsologtostderr -v=1: (1.364399027s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-516675 -n no-preload-516675
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-516675 -n no-preload-516675: exit status 2 (227.271007ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-516675 -n no-preload-516675
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-516675 -n no-preload-516675: exit status 2 (236.223895ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-516675 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-516675 -n no-preload-516675
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-516675 -n no-preload-516675
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.41s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (55.28s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-033308 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-033308 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (55.281809373s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (55.28s)
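
The newest-cni profile starts the newest (beta) Kubernetes with an explicit CNI network plugin and a custom pod CIDR handed through to kubeadm, and waits only for the apiserver, system pods and the default service account rather than full readiness:

  out/minikube-linux-amd64 start -p newest-cni-033308 --memory=3072 --alsologtostderr \
      --wait=apiserver,system_pods,default_sa --network-plugin=cni \
      --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
      --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.35.0-beta.0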

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-887442 -n embed-certs-887442
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-887442 -n embed-certs-887442: exit status 7 (81.387829ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-887442 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (58.29s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-887442 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.2
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-887442 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.2: (58.016526207s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-887442 -n embed-certs-887442
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (58.29s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.33s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-336856 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [742733ef-3d3c-4a6d-a30e-09af869c40cf] Pending
helpers_test.go:352: "busybox" [742733ef-3d3c-4a6d-a30e-09af869c40cf] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [742733ef-3d3c-4a6d-a30e-09af869c40cf] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 11.004309164s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-336856 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.33s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.19s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-336856 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-336856 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.093572663s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-336856 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.19s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (70.5s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-336856 --alsologtostderr -v=3
E1205 07:13:27.900772   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/addons-704432/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-336856 --alsologtostderr -v=3: (1m10.504121699s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (70.50s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.07s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-033308 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-033308 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.069219318s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.07s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (87.21s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-033308 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-033308 --alsologtostderr -v=3: (1m27.205399227s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (87.21s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-bm6tw" [db117e05-e731-4f48-b267-d2b2ac154b8f] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004969627s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-bm6tw" [db117e05-e731-4f48-b267-d2b2ac154b8f] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003428511s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-887442 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-887442 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.21s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Pause (2.46s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-887442 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-887442 -n embed-certs-887442
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-887442 -n embed-certs-887442: exit status 2 (224.695502ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-887442 -n embed-certs-887442
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-887442 -n embed-certs-887442: exit status 2 (226.527404ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-887442 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-887442 -n embed-certs-887442
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-887442 -n embed-certs-887442
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.46s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (55.6s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-550303 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
E1205 07:14:35.318678   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/old-k8s-version-445695/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:14:35.325164   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/old-k8s-version-445695/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:14:35.336572   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/old-k8s-version-445695/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:14:35.358119   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/old-k8s-version-445695/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-550303 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (55.595357976s)
--- PASS: TestNetworkPlugins/group/auto/Start (55.60s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.15s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-336856 -n default-k8s-diff-port-336856
E1205 07:14:35.399532   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/old-k8s-version-445695/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-336856 -n default-k8s-diff-port-336856: exit status 7 (62.431605ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-336856 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E1205 07:14:35.481119   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/old-k8s-version-445695/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.15s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (107.22s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-336856 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.2
E1205 07:14:35.643414   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/old-k8s-version-445695/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:14:35.964980   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/old-k8s-version-445695/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:14:36.607015   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/old-k8s-version-445695/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:14:37.889081   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/old-k8s-version-445695/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:14:40.450809   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/old-k8s-version-445695/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:14:45.573094   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/old-k8s-version-445695/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:14:51.677748   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/no-preload-516675/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:14:51.684125   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/no-preload-516675/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:14:51.696274   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/no-preload-516675/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:14:51.717541   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/no-preload-516675/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:14:51.758793   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/no-preload-516675/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:14:51.840308   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/no-preload-516675/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:14:52.002005   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/no-preload-516675/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:14:52.323742   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/no-preload-516675/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:14:52.966037   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/no-preload-516675/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:14:54.247935   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/no-preload-516675/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:14:55.814942   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/old-k8s-version-445695/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:14:56.809840   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/no-preload-516675/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-336856 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.2: (1m46.982832819s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-336856 -n default-k8s-diff-port-336856
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (107.22s)

TestNetworkPlugins/group/auto/KubeletFlags (0.43s)
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-550303 "pgrep -a kubelet"
I1205 07:14:58.329614   16702 config.go:182] Loaded profile config "auto-550303": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.43s)

TestNetworkPlugins/group/auto/NetCatPod (10.95s)
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-550303 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-2zfqm" [1a8cbbe4-98f4-424d-a63e-b282af927814] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1205 07:15:01.932080   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/no-preload-516675/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-2zfqm" [1a8cbbe4-98f4-424d-a63e-b282af927814] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.004130772s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.95s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.14s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-033308 -n newest-cni-033308
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-033308 -n newest-cni-033308: exit status 7 (61.620942ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-033308 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.14s)

TestStartStop/group/newest-cni/serial/SecondStart (45.96s)
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-033308 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-033308 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (44.896753647s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-033308 -n newest-cni-033308
start_stop_delete_test.go:260: (dbg) Done: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-033308 -n newest-cni-033308: (1.062315329s)
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (45.96s)

TestNetworkPlugins/group/auto/DNS (0.14s)
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-550303 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.14s)

TestNetworkPlugins/group/auto/Localhost (0.14s)
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-550303 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.14s)

TestNetworkPlugins/group/auto/HairPin (0.13s)
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-550303 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.13s)

TestNetworkPlugins/group/kindnet/Start (58.87s)
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-550303 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
E1205 07:15:32.655335   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/no-preload-516675/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:15:37.468552   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/functional-895947/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-550303 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (58.873421801s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (58.87s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.38s)
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-033308 image list --format=json
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.38s)

TestStartStop/group/newest-cni/serial/Pause (3.54s)
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-033308 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p newest-cni-033308 --alsologtostderr -v=1: (1.182577361s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-033308 -n newest-cni-033308
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-033308 -n newest-cni-033308: exit status 2 (227.127756ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-033308 -n newest-cni-033308
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-033308 -n newest-cni-033308: exit status 2 (233.791218ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-033308 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 unpause -p newest-cni-033308 --alsologtostderr -v=1: (1.137095023s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-033308 -n newest-cni-033308
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-033308 -n newest-cni-033308
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.54s)

TestNetworkPlugins/group/calico/Start (74.09s)
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-550303 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
E1205 07:15:57.259419   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/old-k8s-version-445695/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:16:13.617375   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/no-preload-516675/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-550303 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m14.091546176s)
--- PASS: TestNetworkPlugins/group/calico/Start (74.09s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-kng79" [9c284c39-c3c4-4801-a2e2-815d5aea5d8f] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003898514s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-sx5db" [53d9717e-b25f-4c71-a2f4-e892e829a34d] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003091729s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.18s)
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-550303 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.18s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-sx5db" [53d9717e-b25f-4c71-a2f4-e892e829a34d] Running
I1205 07:16:28.881343   16702 config.go:182] Loaded profile config "kindnet-550303": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004399654s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-336856 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

TestNetworkPlugins/group/kindnet/NetCatPod (10.31s)
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-550303 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-lrr8c" [1b6546b5-b9a1-4023-9f27-f841c999c35a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-lrr8c" [1b6546b5-b9a1-4023-9f27-f841c999c35a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.004926381s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.31s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.19s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-336856 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.19s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (2.57s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-336856 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-336856 -n default-k8s-diff-port-336856
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-336856 -n default-k8s-diff-port-336856: exit status 2 (224.326062ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-336856 -n default-k8s-diff-port-336856
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-336856 -n default-k8s-diff-port-336856: exit status 2 (231.324312ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-336856 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-336856 -n default-k8s-diff-port-336856
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-336856 -n default-k8s-diff-port-336856
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.57s)

TestNetworkPlugins/group/custom-flannel/Start (67.57s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-550303 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-550303 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m7.570009055s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (67.57s)

TestNetworkPlugins/group/kindnet/DNS (0.15s)
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-550303 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.15s)

TestNetworkPlugins/group/kindnet/Localhost (0.12s)
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-550303 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.12s)

TestNetworkPlugins/group/kindnet/HairPin (0.13s)
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-550303 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.13s)

TestNetworkPlugins/group/enable-default-cni/Start (85.23s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-550303 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-550303 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m25.232220377s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (85.23s)

TestNetworkPlugins/group/calico/ControllerPod (6.06s)
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-9kfcl" [b13d6b32-4e4d-485d-b4ab-b81c2c8e1b7f] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:352: "calico-node-9kfcl" [b13d6b32-4e4d-485d-b4ab-b81c2c8e1b7f] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.055142849s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.06s)

TestNetworkPlugins/group/calico/KubeletFlags (0.19s)
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-550303 "pgrep -a kubelet"
I1205 07:17:16.817826   16702 config.go:182] Loaded profile config "calico-550303": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.19s)

TestNetworkPlugins/group/calico/NetCatPod (12.26s)
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-550303 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-rplxc" [a70f9327-cb49-4bbf-b9f0-49579160667e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1205 07:17:19.181755   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/old-k8s-version-445695/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-rplxc" [a70f9327-cb49-4bbf-b9f0-49579160667e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.003391212s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.26s)

TestNetworkPlugins/group/calico/DNS (0.18s)
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-550303 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.18s)

TestNetworkPlugins/group/calico/Localhost (0.13s)
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-550303 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.13s)

TestNetworkPlugins/group/calico/HairPin (0.14s)
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-550303 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.14s)

TestNetworkPlugins/group/flannel/Start (68.77s)
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-550303 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-550303 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m8.772772191s)
--- PASS: TestNetworkPlugins/group/flannel/Start (68.77s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.21s)
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-550303 "pgrep -a kubelet"
I1205 07:17:45.624930   16702 config.go:182] Loaded profile config "custom-flannel-550303": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.21s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (11.29s)
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-550303 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-d8llb" [08123712-a727-4cd8-82e9-efff1a350866] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-d8llb" [08123712-a727-4cd8-82e9-efff1a350866] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.005003155s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.29s)

TestNetworkPlugins/group/custom-flannel/DNS (0.18s)
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-550303 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.18s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-550303 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-550303 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)

TestNetworkPlugins/group/bridge/Start (78.66s)
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-550303 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
E1205 07:18:13.808441   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/default-k8s-diff-port-336856/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:18:15.090814   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/default-k8s-diff-port-336856/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:18:17.652424   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/default-k8s-diff-port-336856/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-550303 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m18.655863557s)
--- PASS: TestNetworkPlugins/group/bridge/Start (78.66s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.2s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-550303 "pgrep -a kubelet"
I1205 07:18:20.431322   16702 config.go:182] Loaded profile config "enable-default-cni-550303": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.20s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.29s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-550303 replace --force -f testdata/netcat-deployment.yaml
I1205 07:18:20.703867   16702 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-v7jbl" [dac255ea-e3b8-41ca-84c4-2442a433c3db] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1205 07:18:22.774783   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/default-k8s-diff-port-336856/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-v7jbl" [dac255ea-e3b8-41ca-84c4-2442a433c3db] Running
E1205 07:18:27.900979   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/addons-704432/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.004630168s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.29s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-550303 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.12s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-550303 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.12s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-550303 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

TestISOImage/PersistentMounts//data (0.17s)
=== RUN   TestISOImage/PersistentMounts//data
=== PAUSE TestISOImage/PersistentMounts//data
=== CONT  TestISOImage/PersistentMounts//data
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-902352 ssh "df -t ext4 /data | grep /data"
--- PASS: TestISOImage/PersistentMounts//data (0.17s)

TestISOImage/PersistentMounts//var/lib/docker (0.17s)
=== RUN   TestISOImage/PersistentMounts//var/lib/docker
=== PAUSE TestISOImage/PersistentMounts//var/lib/docker
=== CONT  TestISOImage/PersistentMounts//var/lib/docker
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-902352 ssh "df -t ext4 /var/lib/docker | grep /var/lib/docker"
--- PASS: TestISOImage/PersistentMounts//var/lib/docker (0.17s)

TestISOImage/PersistentMounts//var/lib/cni (0.18s)
=== RUN   TestISOImage/PersistentMounts//var/lib/cni
=== PAUSE TestISOImage/PersistentMounts//var/lib/cni
=== CONT  TestISOImage/PersistentMounts//var/lib/cni
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-902352 ssh "df -t ext4 /var/lib/cni | grep /var/lib/cni"
--- PASS: TestISOImage/PersistentMounts//var/lib/cni (0.18s)

TestISOImage/PersistentMounts//var/lib/kubelet (0.17s)
=== RUN   TestISOImage/PersistentMounts//var/lib/kubelet
=== PAUSE TestISOImage/PersistentMounts//var/lib/kubelet
=== CONT  TestISOImage/PersistentMounts//var/lib/kubelet
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-902352 ssh "df -t ext4 /var/lib/kubelet | grep /var/lib/kubelet"
--- PASS: TestISOImage/PersistentMounts//var/lib/kubelet (0.17s)

TestISOImage/PersistentMounts//var/lib/minikube (0.18s)
=== RUN   TestISOImage/PersistentMounts//var/lib/minikube
=== PAUSE TestISOImage/PersistentMounts//var/lib/minikube
=== CONT  TestISOImage/PersistentMounts//var/lib/minikube
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-902352 ssh "df -t ext4 /var/lib/minikube | grep /var/lib/minikube"
--- PASS: TestISOImage/PersistentMounts//var/lib/minikube (0.18s)

TestISOImage/PersistentMounts//var/lib/toolbox (0.18s)
=== RUN   TestISOImage/PersistentMounts//var/lib/toolbox
=== PAUSE TestISOImage/PersistentMounts//var/lib/toolbox
=== CONT  TestISOImage/PersistentMounts//var/lib/toolbox
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-902352 ssh "df -t ext4 /var/lib/toolbox | grep /var/lib/toolbox"
--- PASS: TestISOImage/PersistentMounts//var/lib/toolbox (0.18s)

TestISOImage/PersistentMounts//var/lib/boot2docker (0.18s)
=== RUN   TestISOImage/PersistentMounts//var/lib/boot2docker
=== PAUSE TestISOImage/PersistentMounts//var/lib/boot2docker
=== CONT  TestISOImage/PersistentMounts//var/lib/boot2docker
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-902352 ssh "df -t ext4 /var/lib/boot2docker | grep /var/lib/boot2docker"
--- PASS: TestISOImage/PersistentMounts//var/lib/boot2docker (0.18s)

TestISOImage/VersionJSON (0.18s)
=== RUN   TestISOImage/VersionJSON
iso_test.go:106: (dbg) Run:  out/minikube-linux-amd64 -p guest-902352 ssh "cat /version.json"
iso_test.go:116: Successfully parsed /version.json:
iso_test.go:118:   commit: fae26615d717024600f131fc4fa68f9450a9ef29
iso_test.go:118:   iso_version: v1.37.0-1763503576-21924
iso_test.go:118:   kicbase_version: v0.0.48-1761985721-21837
iso_test.go:118:   minikube_version: v1.37.0
--- PASS: TestISOImage/VersionJSON (0.18s)

TestISOImage/eBPFSupport (0.17s)
=== RUN   TestISOImage/eBPFSupport
iso_test.go:125: (dbg) Run:  out/minikube-linux-amd64 -p guest-902352 ssh "test -f /sys/kernel/btf/vmlinux && echo 'OK' || echo 'NOT FOUND'"
--- PASS: TestISOImage/eBPFSupport (0.17s)
E1205 07:18:53.498948   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/default-k8s-diff-port-336856/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-zppnf" [eb0e400a-b8ad-4a79-bdf0-10669d0e28cf] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.005651318s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.19s)
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-550303 "pgrep -a kubelet"
I1205 07:19:00.042844   16702 config.go:182] Loaded profile config "flannel-550303": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.19s)

TestNetworkPlugins/group/flannel/NetCatPod (9.29s)
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-550303 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-2x6ws" [671e27eb-b01c-47d6-9f95-f355e3c0b479] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-2x6ws" [671e27eb-b01c-47d6-9f95-f355e3c0b479] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 9.003950417s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (9.29s)

TestNetworkPlugins/group/flannel/DNS (0.15s)
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-550303 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.15s)

TestNetworkPlugins/group/flannel/Localhost (0.13s)
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-550303 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.13s)

TestNetworkPlugins/group/flannel/HairPin (0.13s)
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-550303 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.13s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.17s)
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-550303 "pgrep -a kubelet"
I1205 07:19:32.305511   16702 config.go:182] Loaded profile config "bridge-550303": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.17s)

TestNetworkPlugins/group/bridge/NetCatPod (9.22s)
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-550303 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-qvwzg" [814c7072-d93a-44e4-a4b7-b9ed10ff307b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1205 07:19:34.460315   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/default-k8s-diff-port-336856/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:19:35.318775   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/old-k8s-version-445695/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-qvwzg" [814c7072-d93a-44e4-a4b7-b9ed10ff307b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.004192111s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.22s)

TestNetworkPlugins/group/bridge/DNS (0.13s)
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-550303 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.13s)

TestNetworkPlugins/group/bridge/Localhost (0.11s)
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-550303 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.11s)

TestNetworkPlugins/group/bridge/HairPin (0.11s)
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-550303 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.11s)
E1205 07:19:59.257433   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/auto-550303/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:19:59.263846   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/auto-550303/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:19:59.275179   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/auto-550303/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:19:59.296617   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/auto-550303/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:19:59.338032   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/auto-550303/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:19:59.419517   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/auto-550303/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:19:59.581112   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/auto-550303/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:19:59.902868   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/auto-550303/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:20:00.544949   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/auto-550303/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:20:01.826320   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/auto-550303/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:20:03.023389   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/old-k8s-version-445695/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:20:04.388421   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/auto-550303/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:20:09.510230   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/auto-550303/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:20:19.381205   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/no-preload-516675/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:20:19.751929   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/auto-550303/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:20:37.468304   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/functional-895947/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:20:40.233809   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/auto-550303/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:20:56.382556   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/default-k8s-diff-port-336856/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:21:21.195152   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/auto-550303/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:21:22.697736   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/kindnet-550303/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:21:22.704214   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/kindnet-550303/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:21:22.715668   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/kindnet-550303/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:21:22.737131   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/kindnet-550303/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:21:22.778563   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/kindnet-550303/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:21:22.860134   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/kindnet-550303/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:21:23.021740   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/kindnet-550303/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:21:23.343487   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/kindnet-550303/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:21:23.985730   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/kindnet-550303/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:21:25.267341   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/kindnet-550303/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:21:27.828767   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/kindnet-550303/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:21:32.950908   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/kindnet-550303/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:21:43.192499   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/kindnet-550303/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:21:47.449736   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/functional-158571/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:22:03.674815   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/kindnet-550303/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:22:10.578394   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/calico-550303/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:22:10.584772   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/calico-550303/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:22:10.596158   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/calico-550303/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:22:10.617618   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/calico-550303/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:22:10.659040   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/calico-550303/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:22:10.740707   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/calico-550303/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:22:10.902209   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/calico-550303/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:22:11.223946   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/calico-550303/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:22:11.865293   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/calico-550303/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:22:13.147632   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/calico-550303/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:22:15.709913   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/calico-550303/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:22:20.831850   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/calico-550303/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:22:31.073432   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/calico-550303/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:22:43.117041   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/auto-550303/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:22:44.637155   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/kindnet-550303/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:22:45.901893   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/custom-flannel-550303/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:22:45.908261   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/custom-flannel-550303/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:22:45.919671   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/custom-flannel-550303/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:22:45.941130   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/custom-flannel-550303/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:22:45.982513   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/custom-flannel-550303/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:22:46.064015   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/custom-flannel-550303/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:22:46.225575   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/custom-flannel-550303/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:22:46.547331   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/custom-flannel-550303/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:22:47.189425   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/custom-flannel-550303/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:22:48.471774   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/custom-flannel-550303/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:22:51.034132   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/custom-flannel-550303/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:22:51.555012   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/calico-550303/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:22:56.156283   16702 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/custom-flannel-550303/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
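Note on the repeated cert_rotation errors above: they all trace back to client certificates under .minikube/profiles/<profile>/ that were removed when the corresponding test profiles were deleted, while the transport cache keeps trying to reload them. A minimal sketch of the failing operation, using only the standard library and a hypothetical profile path (the report's real paths live under /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/):

package main

import (
	"crypto/tls"
	"fmt"
)

func main() {
	// Hypothetical profile path, stands in for the deleted test profiles above.
	certFile := "/home/jenkins/.minikube/profiles/auto-550303/client.crt"
	keyFile := "/home/jenkins/.minikube/profiles/auto-550303/client.key"

	// Once the profile directory is gone, reloading the key pair fails with
	// "no such file or directory", which is what cert_rotation.go keeps logging.
	if _, err := tls.LoadX509KeyPair(certFile, keyFile); err != nil {
		fmt.Println("Loading client cert failed:", err)
	}
}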

                                                
                                    

Test skip (51/437)

Order skipped test Duration
5 TestDownloadOnly/v1.28.0/cached-images 0
6 TestDownloadOnly/v1.28.0/binaries 0
7 TestDownloadOnly/v1.28.0/kubectl 0
14 TestDownloadOnly/v1.34.2/cached-images 0
15 TestDownloadOnly/v1.34.2/binaries 0
16 TestDownloadOnly/v1.34.2/kubectl 0
22 TestDownloadOnly/v1.35.0-beta.0/preload-exists 0.08
25 TestDownloadOnly/v1.35.0-beta.0/kubectl 0
29 TestDownloadOnlyKic 0
38 TestAddons/serial/Volcano 0.3
42 TestAddons/serial/GCPAuth/RealCredentials 0
49 TestAddons/parallel/Olm 0
56 TestAddons/parallel/AmdGpuDevicePlugin 0
60 TestDockerFlags 0
63 TestDockerEnvContainerd 0
64 TestHyperKitDriverInstallOrUpdate 0
65 TestHyperkitDriverSkipUpgrade 0
116 TestFunctional/parallel/DockerEnv 0
117 TestFunctional/parallel/PodmanEnv 0
125 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
126 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
127 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
128 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
129 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
130 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
131 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
132 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
209 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv 0
210 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv 0
218 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
219 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel 0.01
220 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService 0.01
221 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect 0.01
222 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
223 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
224 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
225 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel 0.01
258 TestGvisorAddon 0
280 TestImageBuild 0
308 TestKicCustomNetwork 0
309 TestKicExistingNetwork 0
310 TestKicCustomSubnet 0
311 TestKicStaticIP 0
343 TestChangeNoneUser 0
346 TestScheduledStopWindows 0
348 TestSkaffold 0
350 TestInsufficientStorage 0
354 TestMissingContainerUpgrade 0
364 TestStartStop/group/disable-driver-mounts 0.19
384 TestNetworkPlugins/group/kubenet 3.54
392 TestNetworkPlugins/group/cilium 4.03
x
+
TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.2/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.2/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.2/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/preload-exists (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/preload-exists
I1205 06:05:08.820866   16702 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
W1205 06:05:08.873277   16702 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4 status code: 404
W1205 06:05:08.901399   16702 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4 status code: 404
aaa_download_only_test.go:113: No preload image
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/preload-exists (0.08s)
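The two 404s above simply mean that no preload tarball has been published for v1.35.0-beta.0 yet, so the test skips itself. A quick way to reproduce the probe outside the test harness, as a hedged sketch that assumes a plain HTTP HEAD request is a good enough availability check:

package main

import (
	"fmt"
	"net/http"
)

func main() {
	// Same URL the skipped test checked; a 404 here means the preload image is missing.
	url := "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4"

	resp, err := http.Head(url)
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	// Expect "404 Not Found" until the preload for this Kubernetes version is published.
	fmt.Println(url, "->", resp.Status)
}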

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:219: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/serial/Volcano (0.3s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-704432 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.30s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:759: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1033: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)
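The TunnelCmd skips in this report all share the same cause: the tunnel tests need to modify the host routing table, but 'route' cannot be run without a password prompt on this runner. A minimal sketch of that kind of pre-check, hypothetical rather than the actual logic in functional_test_tunnel_test.go:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// "sudo -n" fails instead of prompting when a password would be required,
	// which is the condition under which the tunnel tests skip themselves.
	cmd := exec.Command("sudo", "-n", "route")
	if err := cmd.Run(); err != nil {
		fmt.Println("password required to execute 'route', skipping tunnel tests:", err)
		return
	}
	fmt.Println("passwordless 'route' available; tunnel tests can run")
}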

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-543167" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-543167
--- SKIP: TestStartStop/group/disable-driver-mounts (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (3.54s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:615: 
----------------------- debugLogs start: kubenet-550303 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-550303

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-550303

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-550303

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-550303

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-550303

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-550303

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-550303

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-550303

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-550303

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-550303

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-550303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-550303"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-550303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-550303"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-550303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-550303"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-550303

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-550303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-550303"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-550303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-550303"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-550303" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-550303" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-550303" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-550303" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-550303" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-550303" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-550303" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-550303" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-550303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-550303"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-550303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-550303"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-550303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-550303"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-550303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-550303"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-550303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-550303"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-550303" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-550303" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-550303" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-550303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-550303"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-550303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-550303"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-550303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-550303"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-550303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-550303"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-550303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-550303"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21997-12744/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 05 Dec 2025 07:03:14 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.50.132:8443
  name: kubernetes-upgrade-256837
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21997-12744/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 05 Dec 2025 07:03:30 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.72.218:8443
  name: running-upgrade-228729
contexts:
- context:
    cluster: kubernetes-upgrade-256837
    extensions:
    - extension:
        last-update: Fri, 05 Dec 2025 07:03:14 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: kubernetes-upgrade-256837
  name: kubernetes-upgrade-256837
- context:
    cluster: running-upgrade-228729
    user: running-upgrade-228729
  name: running-upgrade-228729
current-context: ""
kind: Config
users:
- name: kubernetes-upgrade-256837
  user:
    client-certificate: /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/kubernetes-upgrade-256837/client.crt
    client-key: /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/kubernetes-upgrade-256837/client.key
- name: running-upgrade-228729
  user:
    client-certificate: /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/running-upgrade-228729/client.crt
    client-key: /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/running-upgrade-228729/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-550303

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-550303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-550303"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-550303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-550303"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-550303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-550303"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-550303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-550303"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-550303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-550303"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-550303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-550303"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-550303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-550303"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-550303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-550303"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-550303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-550303"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-550303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-550303"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-550303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-550303"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-550303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-550303"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-550303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-550303"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-550303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-550303"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-550303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-550303"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-550303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-550303"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-550303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-550303"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-550303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-550303"

                                                
                                                
----------------------- debugLogs end: kubenet-550303 [took: 3.368415628s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-550303" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-550303
--- SKIP: TestNetworkPlugins/group/kubenet (3.54s)
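Note: every entry in the kubenet-550303 debugLogs block above fails with either "context was not found" (kubectl) or "Profile ... not found" (minikube) because the test variant was skipped before any cluster was created; debugLogs still executes during cleanup against the never-started profile. A minimal way to confirm this from the host (hypothetical commands, not part of the recorded test run):

  $ out/minikube-linux-amd64 profile list     # kubenet-550303 would not be listed
  $ kubectl config get-contexts               # no kubenet-550303 context present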

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (4.03s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:615: 
----------------------- debugLogs start: cilium-550303 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-550303

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-550303

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-550303

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-550303

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-550303

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-550303

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-550303

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-550303

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-550303

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-550303

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-550303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-550303"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-550303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-550303"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-550303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-550303"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-550303

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-550303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-550303"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-550303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-550303"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-550303" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-550303" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-550303" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-550303" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-550303" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-550303" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-550303" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-550303" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-550303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-550303"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-550303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-550303"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-550303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-550303"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-550303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-550303"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-550303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-550303"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-550303

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-550303

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-550303" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-550303" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-550303

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-550303

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-550303" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-550303" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-550303" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-550303" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-550303" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-550303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-550303"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-550303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-550303"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-550303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-550303"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-550303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-550303"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-550303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-550303"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21997-12744/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 05 Dec 2025 07:03:14 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.50.132:8443
  name: kubernetes-upgrade-256837
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21997-12744/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 05 Dec 2025 07:03:30 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.72.218:8443
  name: running-upgrade-228729
contexts:
- context:
    cluster: kubernetes-upgrade-256837
    extensions:
    - extension:
        last-update: Fri, 05 Dec 2025 07:03:14 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: kubernetes-upgrade-256837
  name: kubernetes-upgrade-256837
- context:
    cluster: running-upgrade-228729
    user: running-upgrade-228729
  name: running-upgrade-228729
current-context: ""
kind: Config
users:
- name: kubernetes-upgrade-256837
  user:
    client-certificate: /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/kubernetes-upgrade-256837/client.crt
    client-key: /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/kubernetes-upgrade-256837/client.key
- name: running-upgrade-228729
  user:
    client-certificate: /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/running-upgrade-228729/client.crt
    client-key: /home/jenkins/minikube-integration/21997-12744/.minikube/profiles/running-upgrade-228729/client.key
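Note: the kubectl config dump above contains contexts only for kubernetes-upgrade-256837 and running-upgrade-228729, and current-context is empty, so no cilium-550303 context exists. That accounts for the two error strings in this block: the k8s entries go through kubectl with an explicit context and fail with "context was not found", while the host entries go through minikube with an explicit profile and fail with "Profile not found". Roughly, the two command shapes behave like this (a hedged sketch; the actual debugLogs invocations may differ):

  $ kubectl --context cilium-550303 get pods -A
  Error in configuration: context was not found for specified context: cilium-550303
  $ out/minikube-linux-amd64 -p cilium-550303 ssh -- sudo systemctl status crio
  * Profile "cilium-550303" not found. Run "minikube profile list" to view all profiles.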

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-550303

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-550303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-550303"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-550303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-550303"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-550303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-550303"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-550303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-550303"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-550303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-550303"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-550303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-550303"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-550303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-550303"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-550303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-550303"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-550303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-550303"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-550303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-550303"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-550303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-550303"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-550303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-550303"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-550303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-550303"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-550303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-550303"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-550303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-550303"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-550303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-550303"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-550303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-550303"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-550303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-550303"

                                                
                                                
----------------------- debugLogs end: cilium-550303 [took: 3.855703115s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-550303" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-550303
--- SKIP: TestNetworkPlugins/group/cilium (4.03s)

                                                
                                    